Data infrastructure growth has a direct, measurable relationship with energy consumption. As organizations ingest more events, retain more data, and deploy more always-on services, infrastructure energy use increases—often faster than business value. For streaming systems, this effect can be amplified by long-running clusters, peak-based sizing, and duplicated pipelines.
Sustainability in this context is not about environmental reporting or corporate commitments. It’s about engineering efficiency.
Ready to start streamlining resource consumption and costs in your Apache Kafka® environment? Try serverless Apache Kafka on Confluent Cloud.
Inefficient data architectures consume more compute, storage, and network resources than necessary, leading to higher cloud spend, lower utilization, and a larger carbon footprint. In heavily regulated industries—such as finance, healthcare, energy, and telecommunications—these inefficiencies are harder to detect because systems are intentionally over-provisioned and underutilized for reliability and compliance.
From an engineering perspective, sustainable real-time systems:
Avoid idle compute and unused capacity
Scale with actual workload, not theoretical peak
Minimize redundant data movement and reprocessing
Retain only the data that delivers ongoing value
These practices improve system reliability and predictability while reducing both cost and energy usage. Sustainability, therefore, becomes a byproduct of good systems design rather than a separate goal.
As data volumes increase, infrastructure often scales in a linear or super-linear way due to conservative sizing and inefficient processing models.
More data ingested → larger clusters and higher baseline compute
Peak-based provisioning → long periods of idle capacity
Batch recomputation → repeated energy-intensive jobs
Over-retention → growing storage and replication overhead
Over time, these effects compound, making both operational cost and energy usage difficult to control.
For teams early in their Kafka journeys, architectural decisions made at the beginning—such as cluster sizing, retention defaults, and pipeline design—have long-term consequences. Retrofitting sustainability later often requires disruptive migrations or reprocessing large volumes of data.
Designing energy-efficient streaming systems early helps:
Keep regulated workloads predictable and auditable
Support growth without proportional (or super-linear) increases in energy use
Establish a foundation for green operations (GreenOps) data pipelines
GreenOps, in the context of streaming systems, is an engineering discipline focused on designing and operating data pipelines that minimize waste while meeting reliability, latency, and compliance requirements.
This approach isn’t a replacement for cloud financial operations (FinOps) or platform operations. Instead, it complements them by focusing on how efficiently systems use the resources they already consume.
For real-time data platforms, GreenOps treats sustainability as a systems design problem. The goal is to maximize useful work per unit of compute, storage, and network input/output (I/O)—thereby lowering both operational cost and the carbon footprint of cloud data infrastructure.
In practice, GreenOps data pipelines emphasize:
High resource utilization rather than idle capacity
Right-sized, elastic infrastructure instead of peak-based provisioning
Intentional data retention and reuse
Continuous measurement of efficiency signals
This is particularly relevant for building sustainable streaming architectures in regulated environments. In these sectors, systems are expected to run continuously, retain data for long periods, and support auditability without excessive duplication, conditions under which inefficiencies can persist unnoticed.
GreenOps is often confused with cost optimization or sustainability reporting. The distinction is important:
| Discipline | Primary Focus | Typical Questions |
| --- | --- | --- |
| FinOps | Spend visibility and allocation | “Who is paying for this?” |
| Cost optimization | Reduction of cloud bills | “How do we spend less?” |
| GreenOps | Resource efficiency and waste reduction | “Are we using resources effectively?” |
GreenOps outcomes may reduce cost, but that’s a side effect, not the primary objective. The core metric is efficiency, not budget adherence.
Streaming platforms already align well with GreenOps principles when designed intentionally:
Continuous processing avoids large batch spikes.
Incremental computation reduces repeated work.
Publish-once, consume-many models limit data duplication.
Event-driven architectures favor elasticity over static sizing.
However, these benefits materialize only when streaming architectures are designed for efficiency, not when batch-era assumptions are carried forward into always-on systems.
Streaming platforms and engines like Apache Kafka® are often assumed to be efficient by default because they process data incrementally. In practice, many real-time systems fall short of sustainability due to architectural decisions that introduce persistent, hidden waste. These inefficiencies are rarely visible in application-level metrics and tend to accumulate over time.
Below are the most common sources of energy and resource waste in streaming architectures built with Kafka.
Always-on clusters sized for peak load: Clusters are frequently provisioned for worst-case throughput or incident scenarios. During normal operation, this leaves large amounts of compute idle while still consuming energy.
Peak-based partitioning strategies: Topic partition counts are often set based on projected future scale. This increases broker overhead, replication traffic, and memory usage even when traffic is low.
Duplicate pipelines for similar use cases: Multiple teams ingest the same source data into separate systems, leading to redundant compute, storage, and network I/O.
Long retention defaults detached from access patterns: Topics retain data far longer than needed for operational or analytical value, increasing storage footprint and replication costs.
Excessive replication and cross-region mirroring: High replication factors and unnecessary geo-replication amplify storage and network energy usage, particularly when data is rarely consumed.
Unused or idle consumers: Consumer groups that no longer serve an active application continue to poll topics, generating background load with no business value.
Batch-style backfills running on streaming infrastructure: Large historical reprocessing jobs executed on always-on streaming clusters create short-term spikes that require permanent over-provisioning.
Each of these patterns increases resource consumption without increasing useful work, directly impacting the carbon footprint of cloud data platforms.
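Idle consumers in particular can be caught with a simple scan of monitoring data. Below is a minimal, illustrative Python sketch, assuming a hypothetical `ConsumerGroupStats` snapshot exported from your metrics system; the rate threshold is arbitrary and should be set from your own traffic baseline.

```python
from dataclasses import dataclass

@dataclass
class ConsumerGroupStats:
    """Hypothetical per-group metrics pulled from your monitoring stack."""
    group_id: str
    messages_consumed_per_hour: float
    active_members: int

def find_idle_groups(stats, min_rate=1.0):
    """Flag groups that keep polling but do almost no useful work."""
    return [s.group_id for s in stats
            if s.active_members > 0 and s.messages_consumed_per_hour < min_rate]

groups = [
    ConsumerGroupStats("fraud-scoring", 120_000.0, 8),
    ConsumerGroupStats("legacy-report", 0.2, 2),   # still polling, no traffic
    ConsumerGroupStats("audit-export", 450.0, 1),
]
print(find_idle_groups(groups))  # ['legacy-report']
```

Flagged groups are candidates for decommissioning, which removes their background polling load entirely.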
Hidden waste in streaming systems often goes unnoticed because:
Systems remain functionally correct and meet service level agreements (SLAs)
Costs are distributed across shared infrastructure
Energy usage is not directly observable at the pipeline level
Regulated environments favor safety margins over efficiency
As a result, inefficient Kafka architectures can operate for years without triggering reliability or cost alarms—all while steadily consuming more compute and energy than required.
From a GreenOps perspective, the issue is rarely streaming itself. The problem is applying batch-era assumptions to real-time systems:
Static capacity instead of elastic scaling
Retain-everything policies instead of value-based retention
Isolated pipelines instead of shared event streams
Correcting these patterns requires a fundamental architectural shift, not a tuning exercise.
A sustainable streaming architecture isn’t defined by a single technology or configuration. It emerges when you consistently apply a small set of GreenOps design principles that reduce waste, improve utilization, and keep systems adaptable as workloads evolve.
For teams early in their Kafka journeys, adopting these principles from the start prevents many of the inefficiencies that later require disruptive re-architecture. These benefits apply regardless of your organization’s industry, scale, or deployment model.
| Principle | Why It Matters | Streaming Practice |
| --- | --- | --- |
| Design for utilization. | Idle resources consume energy without producing value. | Right-size clusters, avoid peak-only sizing, and monitor broker and consumer utilization. |
| Prefer incremental over recompute. | Reprocessing large datasets multiplies compute and energy usage. | Use continuous stream processing instead of recurring batch jobs. |
| Share streams, not pipelines. | Duplicate ingestion and enrichment increase network and compute load. | Publish once; allow multiple consumer groups to reuse the same topics. |
| Retain data intentionally. | Long retention amplifies storage, replication, and recovery cost. | Set retention based on access patterns, not defaults. |
| Scale elastically. | Static capacity leads to chronic underutilization. | Autoscale consumers and processing layers with demand. |
| Minimize data movement. | Network I/O is a significant energy consumer. | Filter, aggregate, or enrich data as close to the source as possible. |
| Make efficiency observable. | Waste can’t be reduced if it’s not visible. | Track utilization, lag, throughput, and storage growth as first-class metrics. |
These principles focus on doing less unnecessary work, which is the most direct way to reduce both cost and carbon footprint in cloud data systems. Now, let’s take a closer look at three of these design principles.
Architectures that introduce artificial spikes—such as batch backfills or scheduled recomputations—force clusters to be sized for peak demand, increasing idle capacity during normal operation.
Sustainable real-time systems favor:
Continuous ingestion and processing
Predictable resource usage
Smaller variance between average and peak load
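To make the variance concrete, here is a toy Python calculation comparing a spiky, batch-style load profile against a continuous one carrying roughly the same daily volume. If clusters are sized for peak, the ratio approximates how much capacity sits idle most of the day; the sample numbers are invented.

```python
def peak_to_average(samples):
    """Peak-to-average ratio of a load profile (1.0 = perfectly steady)."""
    avg = sum(samples) / len(samples)
    return max(samples) / avg

# Hourly throughput (events/sec): a nightly-batch-style profile vs. steady streaming
batchy = [100] * 22 + [5000, 5000]  # quiet all day, two huge recompute hours
steady = [550] * 24                 # similar daily volume, spread continuously

print(round(peak_to_average(batchy), 1))  # 9.8 -> sized ~10x above average need
print(round(peak_to_average(steady), 1))  # 1.0 -> capacity matches demand
```

A peak-to-average ratio near 10 means roughly 90% of provisioned capacity is idle in a typical hour.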
Data retention is often configured once and forgotten. In regulated environments, teams should avoid retain-everything policies that exceed actual access or audit requirements.
From a GreenOps perspective, retention:
Determines long-term storage and replication cost
Affects recovery time and energy usage during rebalancing
Should reflect how data is used, not how easy it is to store
When done right, implementing strategic retention policies can be one of the highest-impact green data engineering practices.
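As a rough illustration of why retention dominates footprint, the sketch below estimates on-disk size from throughput, message size, retention window, and replication factor. The `topic_footprint_gb` helper and all numbers are hypothetical, and real topics are usually smaller thanks to compression.

```python
def topic_footprint_gb(msgs_per_sec, avg_msg_bytes, retention_days,
                       replication_factor=3):
    """Rough pre-compression on-disk footprint of one topic, in GB."""
    daily_bytes = msgs_per_sec * avg_msg_bytes * 86_400  # seconds per day
    return daily_bytes * retention_days * replication_factor / 1e9

# Default 7-day retention vs. a usage-driven 1-day policy, 2k msgs/s at ~1 KB
print(round(topic_footprint_gb(2_000, 1_024, 7), 1))  # 3715.9 GB on disk
print(round(topic_footprint_gb(2_000, 1_024, 1), 1))  # 530.8 GB on disk
```

Cutting retention from seven days to one removes roughly 3 TB of replicated storage for this single topic.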
Efficient Kafka architectures encourage reuse:
A single ingestion pipeline serves multiple downstream use cases.
Enrichment happens once and is shared.
Schemas are governed to support compatibility over time.
This reduces duplicate compute and aligns with resource-efficient pipelines.
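The publish-once, consume-many model can be shown with a toy simulation: one shared log, with each consumer group tracking only its own offset, so adding a reader adds no ingestion or storage cost. The `poll` helper is a deliberate simplification of real consumer behavior.

```python
# One shared event log; each consumer group tracks its own offset,
# so a new consumer reuses the same stored data instead of re-ingesting it.
log = [{"order_id": i, "amount": 10 * i} for i in range(5)]  # the shared topic
offsets = {"billing": 0, "analytics": 0, "fraud": 0}

def poll(group, max_records=2):
    """Return the next batch for a group and advance only that group's offset."""
    start = offsets[group]
    records = log[start:start + max_records]
    offsets[group] = start + len(records)
    return records

poll("billing"); poll("billing")
poll("analytics")
print(offsets)  # {'billing': 4, 'analytics': 2, 'fraud': 0}
```

Each group progresses independently over a single copy of the data, which is exactly what makes stream reuse cheaper than duplicate pipelines.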
By grounding system design in these principles, teams create low-carbon data architectures that scale with demand rather than with assumptions. The next section builds on this foundation by examining a common misconception regarding whether batch or streaming is more sustainable in practice.
At first glance, batch processing may appear more energy-efficient because compute resources are active only during scheduled jobs. In practice, many batch-only architectures create larger energy spikes, more redundant work, and higher baseline provisioning than well-designed batch + streaming systems.
From a GreenOps perspective, the question is not streaming versus batch in isolation but how work is distributed over time and how much work is repeated.
Traditional batch architectures rely on periodic recomputation. As data volumes grow, this model introduces several inefficiencies.
Large compute spikes: Batch jobs require short periods of very high capacity, forcing infrastructure to be sized for peak demand.
Repeated full scans: Each run often reprocesses large portions of historical data, even when only a small fraction has changed.
Duplicated ETL pipelines: Similar transformations are implemented multiple times across different jobs.
Idle infrastructure between runs: Capacity that’s reserved to handle batch peaks sits unused most of the time.
These patterns increase total energy consumption, even if systems appear idle outside of scheduled windows.
Streaming systems process data incrementally as events arrive. When designed correctly, this model aligns naturally with energy-efficient streaming.
Incremental computation: Only new or changed data is processed, eliminating repeated work.
Lower peak-to-average ratio: Continuous processing reduces the need for extreme burst capacity.
Reuse of event streams: A single stream can serve multiple consumers without re-ingestion.
Faster feedback loops: Issues are detected earlier, reducing the need for large-scale reprocessing.
These characteristics make streaming architectures well suited for sustainable real-time systems, especially when data freshness is required.
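Incremental computation is easy to demonstrate: a running aggregate does constant work per event yet produces exactly the same answer as a full batch recompute. This is a generic sketch, not tied to any particular stream processing API.

```python
class RunningAverage:
    """Incremental aggregate: O(1) work per event, no history re-scan."""
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, value):
        self.count += 1
        self.total += value
        return self.total / self.count

agg = RunningAverage()
history = []
for v in [10, 20, 30, 40]:
    history.append(v)
    incremental = agg.update(v)
    batch = sum(history) / len(history)  # batch-style full recompute
    assert incremental == batch          # same answer, a fraction of the work

print(agg.count, agg.total / agg.count)  # 4 25.0
```

The batch version re-reads the entire history on every update; the streaming version touches each event exactly once.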
Streaming and Batch Compared for GreenOps
| Dimension | Batch Processing | Streaming Processing |
| --- | --- | --- |
| Compute pattern | Periodic spikes | Continuous, steady |
| Reprocessing | Frequent full recompute | Incremental only |
| Capacity planning | Peak-based | Demand-driven |
| Data duplication | Common | Minimized |
| Energy efficiency | Low at scale | Higher when well designed |
| Suitability for real time | Limited | Native |
This comparison highlights a key insight: Energy efficiency improves when systems avoid unnecessary repetition and extreme peaks.
Always-on infrastructure is not inherently wasteful. Waste arises when always-on systems are:
Over-provisioned
Poorly utilized
Duplicated across teams
A right-sized, elastic streaming platform running at high utilization can consume less total energy than a batch-only system that repeatedly spins up large clusters for recomputation.
Streaming is not a universal replacement for batch processing, which remains appropriate for:
One-time historical backfills
Large, infrequent analytical jobs
Offline model training with historical data
However, when batch is used to compensate for missing streaming pipelines, it often introduces avoidable inefficiency.
Sustainable streaming architecture is achieved through repeatable design patterns that reduce unnecessary work across compute, storage, and network layers. These patterns are implementation-agnostic and apply to most real-time platforms, including Kafka-based systems, without relying on vendor-specific features.
The goal is simple: Reduce the amount of infrastructure required to deliver the same outcomes.
Pattern 1: Elastic or Serverless Processing
Scale compute with demand instead of provisioning for peak.
Impact: Less idle capacity, higher utilization, lower energy use.
Pattern 2: Tiered Storage (Hot/Warm/Cold)
Store data based on access frequency, not defaults.
Impact: Reduced storage footprint and replication energy.
Pattern 3: Publish Once, Consume Many
Reuse shared event streams across multiple consumers.
Impact: Eliminates duplicate ingestion and processing.
Pattern 4: Incremental Processing
Process only new events instead of recomputing history.
Impact: Fewer compute cycles and lower peak demand.
Pattern 5: Compacted Topics for State
Retain only the latest value per key for reference data.
Impact: Faster recovery and significantly lower storage I/O.
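Pattern 5 can be illustrated in a few lines of Python: retain only the newest record per key, which is the state a compacted Kafka topic converges toward. This is a conceptual model, not the broker's actual log cleaner implementation.

```python
def compact(records):
    """Keep only the newest value per key, like a compacted Kafka topic."""
    latest = {}
    for key, value in records:  # records arrive in append order
        latest[key] = value     # later writes supersede earlier ones
    return latest

changelog = [
    ("user-1", {"tier": "free"}),
    ("user-2", {"tier": "pro"}),
    ("user-1", {"tier": "pro"}),
    ("user-1", {"tier": "enterprise"}),
]
print(compact(changelog))
# 4 appended records collapse to 2 live ones: restoring state replays far less data
```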
Pattern 6: Autoscaling Consumers
Adjust consumer concurrency based on lag or throughput.
Impact: Prevents over-parallelization and wasted compute.
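One simple policy for Pattern 6 is to derive the desired consumer count directly from lag, clamped to safe bounds. The thresholds below are purely illustrative; a production autoscaler would also add hysteresis so the group doesn't flap between sizes.

```python
def desired_consumers(lag, lag_per_consumer=10_000,
                      min_consumers=1, max_consumers=12):
    """Pick a consumer count from observed lag, clamped to bounds."""
    target = -(-lag // lag_per_consumer)  # ceiling division
    return min(max_consumers, max(min_consumers, target))

print(desired_consumers(lag=15_000))   # 2  -> scale down a over-parallelized group
print(desired_consumers(lag=90_000))   # 9  -> scale up under backlog
print(desired_consumers(lag=500_000))  # 12 -> capped by the upper bound
```

The upper bound should never exceed the topic's partition count, since extra consumers beyond that sit idle by design.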
Pattern 7: Right-Sized Partitioning
Partition for realistic throughput, not theoretical growth.
Impact: Lower broker overhead and background resource usage.
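A back-of-the-envelope sizing helper for Pattern 7, assuming you have measured per-partition throughput for your own workload; the 10 MB/s figure is a placeholder, not a Kafka guarantee.

```python
import math

def partition_count(target_mb_per_sec, per_partition_mb_per_sec,
                    headroom=1.2, min_partitions=1):
    """Size partitions for measured throughput plus modest headroom,
    not for a hypothetical future peak."""
    needed = target_mb_per_sec * headroom / per_partition_mb_per_sec
    return max(min_partitions, math.ceil(needed))

# 40 MB/s measured, ~10 MB/s per partition on this (hypothetical) hardware
print(partition_count(40, 10))       # 5 partitions with 20% headroom
print(partition_count(40, 10, 3.0))  # 12 -> "future proof" sizing, 2.4x the overhead
```

Since partitions can be added later but not removed, starting small and growing with demand is the lower-waste default.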
Pattern 8: Edge Filtering and Early Aggregation
Reduce data volume before central processing.
Impact: Lower network traffic and downstream compute load.
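A minimal sketch of Pattern 8: discard near-idle readings at the source and ship one summary per device per window instead of every raw event. The threshold and record shape are hypothetical.

```python
from collections import defaultdict

def aggregate_at_edge(readings, threshold=0.5):
    """Filter noise and emit one summary per device for the window."""
    summary = defaultdict(lambda: {"count": 0, "total": 0.0})
    for device_id, value in readings:
        if value < threshold:  # filter at the source, before any network hop
            continue
        s = summary[device_id]
        s["count"] += 1
        s["total"] += value
    return dict(summary)

window = [("sensor-a", 0.1), ("sensor-a", 2.0), ("sensor-a", 3.0),
          ("sensor-b", 0.2), ("sensor-b", 0.3)]
out = aggregate_at_edge(window)
print(len(window), "raw readings ->", len(out), "summary record(s)")
```

Everything downstream (brokers, consumers, storage) now handles one record instead of five.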
Improving the sustainability of a streaming system doesn’t require a platform rewrite. Most gains come from systematic, incremental changes that reduce waste across compute, storage, and network layers. The following steps form a practical GreenOps checklist for teams building or operating Kafka-based pipelines.
This is an architecture-first process designed for both early-stage and growing streaming deployments.
Before making changes, establish a baseline for how efficiently your system is operating. Focus on:
Broker CPU, memory, and disk utilization
Consumer lag versus throughput
Average versus peak load over time
Storage growth per topic
Low utilization is the strongest signal of hidden energy waste.
Look for resources that remain allocated without consistent work:
Underutilized brokers
Always-on processing jobs with low input rates
Consumer groups with near-zero lag for extended periods
These components consume energy continuously, even when idle.
Audit your streaming topology for redundant work:
Multiple ingestion pipelines from the same source
Repeated enrichment or transformation logic
Separate topics carrying identical data
Consolidation reduces compute, storage, and network usage immediately.
Shift from static capacity to demand-driven scaling:
Autoscale consumers based on lag or throughput.
Scale processing jobs independently of ingestion.
Avoid fixed parallelism where traffic is variable.
Elasticity is a core requirement for energy-efficient streaming.
Review topic retention against actual access patterns:
Shorten retention for operational streams.
Move historical data to lower-energy storage tiers.
Avoid retaining data “just in case.”
Retention decisions have long-term sustainability impact.
Reduce data volume in motion:
Prefer compact binary serialization formats.
Enable compression appropriate to your workload.
Smaller messages reduce network I/O, disk usage, and processing overhead.
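The effect of compression on repetitive event payloads is easy to demonstrate with the Python standard library. In real deployments you would typically enable Kafka's built-in codecs (e.g., lz4 or zstd) rather than compressing by hand; this is only an illustration of the size reduction available.

```python
import gzip
import json

# 1,000 structurally repetitive telemetry events (hypothetical schema)
events = [{"device_id": f"sensor-{i % 10}", "temperature_c": 21.5, "status": "OK"}
          for i in range(1_000)]

raw = json.dumps(events).encode()
compressed = gzip.compress(raw)

ratio = len(raw) / len(compressed)
print(f"{len(raw)} bytes -> {len(compressed)} bytes ({ratio:.0f}x smaller)")
```

Every byte removed here is saved again at each replica and each consumer fetch, so the network and disk savings compound.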
Separate storage based on performance needs:
Keep hot data close to compute.
Offload cold data to lower-cost, lower-energy tiers.
This reduces storage replication and recovery energy costs.
Sustainable real-time systems rely on visibility:
Monitor utilization (in context of carbon intensity), not just errors and latency.
Track storage growth per stream.
Correlate throughput with resource consumption.
Measurement ensures that improvements are sustained over time.
GreenOps is effective only when efficiency is measurable. Traditional streaming metrics focus on correctness and latency; GreenOps adds signals that reveal waste, utilization, and long-term sustainability. These metrics should be tracked continuously and reviewed alongside reliability indicators.
The goal is not to introduce new tooling but to reinterpret existing observability data through an efficiency lens.
| Metric | What It Shows | Why It Matters for Sustainability |
| --- | --- | --- |
| Compute utilization (%) | How much allocated CPU and memory is doing useful work | Low utilization indicates idle capacity and unnecessary energy use |
| Average vs peak throughput | Traffic variability over time | Large gaps signal peak-based over-provisioning |
| Consumer lag stability | Processing efficiency relative to input rate | Stable lag with lower concurrency indicates efficient scaling |
| Storage growth per topic | Long-term data accumulation | Identifies over-retention and unused data |
| Replication overhead | Extra storage and network cost of replicas | High replication multiplies energy use |
| Partition-to-throughput ratio | Partition efficiency | Excess partitions increase background overhead |
| Idle consumer groups | Consumers with no meaningful work | Silent source of compute waste |
| Reprocessing frequency | How often data is recomputed | Frequent recompute increases energy and peak demand |
| Network I/O per event | Data movement efficiency | High I/O amplifies carbon footprint |
These metrics collectively describe how resource-efficient pipelines behave over time.
GreenOps doesn’t introduce new key performance indicators (KPIs); it reframes existing ones:
Healthy systems show high utilization with stable latency.
Wasteful systems show low utilization with high provisioned capacity.
Sustainable real-time systems minimize variance between average and peak usage.
A system that is fast but poorly utilized is not efficient.
Certain metric combinations are strong waste indicators:
Low throughput + high broker count → over-provisioned clusters
Stable lag + high consumer concurrency → excess parallelism
Fast storage growth + low read rates → over-retention
High replication + low consumption → unnecessary redundancy
These patterns help teams prioritize architectural fixes over tuning.
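These rules can be encoded directly as checks over a metrics snapshot. The field names and thresholds below are purely illustrative and must be calibrated to your own environment.

```python
def waste_indicators(m):
    """Map metric combinations (hypothetical snapshot dict) to likely waste."""
    findings = []
    if m["throughput_mb_s"] < 5 and m["broker_count"] > 6:
        findings.append("over-provisioned cluster")
    if m["lag_stable"] and m["consumer_count"] > 2 * m["partitions_active"]:
        findings.append("excess parallelism")
    if m["storage_growth_gb_day"] > 100 and m["reads_per_day"] < 1_000:
        findings.append("over-retention")
    return findings

snapshot = {"throughput_mb_s": 2, "broker_count": 9, "lag_stable": True,
            "consumer_count": 24, "partitions_active": 6,
            "storage_growth_gb_day": 250, "reads_per_day": 40}
print(waste_indicators(snapshot))
# ['over-provisioned cluster', 'excess parallelism', 'over-retention']
```

Running checks like these on a schedule turns the waste patterns above into actionable alerts rather than annual audit findings.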
Sustainable streaming architecture isn’t theoretical. Many regulated industries already apply GreenOps patterns—often unintentionally—to reduce waste while meeting strict reliability, latency, and compliance requirements. Below are representative examples showing scenario → problem → efficiency gain.
If thousands of devices emit high-frequency telemetry data continuously, you'll often deal with:
Raw events ingested at full fidelity
Duplicate pipelines for monitoring, analytics, and alerting
Long retention of unused raw data
The solution:
Edge filtering and early aggregation to reduce event volume
Shared event streams reused across teams
Tiered storage applied to historical telemetry
As a result, you’ll see lower network I/O, reduced broker load, and significantly slower storage growth.
If streaming transactions are analyzed in real time to detect fraud, companies often use:
Batch backfills to recompute fraud signals
Over-provisioned processing to handle worst-case spikes
Repeated enrichment across pipelines
Instead, you can build systems with:
Incremental stream processing to replace recomputation
Autoscaling consumers that align compute with transaction rate
Enrichment performed once and reused
That allows you to lower peak capacity requirements while improving detection latency.
Retail and ecommerce companies often take user interaction events to drive personalization and analytics. Inefficiencies occur when there are:
Separate ingestion paths for analytics, marketing, and experimentation
Over-retention of fine-grained click data
Excessive partitioning for future growth
Instead, your team should implement:
Publish-once, consume-many architecture
Retention shortened for operational streams
Partition counts right-sized to actual throughput
This approach allows you to reduce compute and storage overhead without loss of analytical value.
Energy and utilities companies often use streaming data from meters and sensors to support grid stability and compliance. Because of strict performance requirements, these organizations often use:
Always-on clusters sized for rare peak events
High replication for all data regardless of access frequency
Centralized processing of all raw signals
You can lower baseline energy consumption while maintaining auditability by implementing:
Elastic processing layers scaled with load
Tiered storage separated by operational versus archival data
Early aggregation that reduces downstream processing
These examples reinforce a key GreenOps insight: Sustainable real-time systems emerge from architectural choices, not industry-specific optimizations.
Many teams adopt streaming with the expectation that it’s inherently efficient. In practice, unsustainable patterns often emerge from architectural shortcuts, legacy assumptions, or operational convenience.
These pitfalls reduce utilization, increase energy consumption, and undermine GreenOps goals—even when systems appear reliable. Avoiding these issues early is often more impactful than adding new optimizations.
Over-retention by default: Retaining data far longer than it’s accessed or required increases storage, replication, and recovery energy. Retention should reflect usage patterns, not convenience.
Too many small clusters: Isolated clusters per team or use case lead to low utilization and duplicated infrastructure. Consolidation often yields immediate efficiency gains.
Batch backfills on streaming infrastructure: Large historical reprocessing jobs create short-lived spikes that force permanent over-provisioning of always-on systems.
Duplicate enrichment and transformation jobs: Recomputing the same logic in multiple pipelines multiplies compute and network usage without adding value.
Ignoring idle consumers: Consumer groups with little or no traffic continue to poll and allocate resources, quietly consuming energy over time.
Partitioning for hypothetical future scale: Excessive partitions increase broker overhead, memory usage, and idle task slots long before they’re needed.
Lift-and-shift batch architectures: Migrating batch-era designs directly into streaming systems preserves inefficiencies rather than eliminating them.
These patterns often survive because:
Systems meet latency and availability SLAs
Waste is distributed across shared infrastructure
Energy usage isn’t tracked at the pipeline level
Teams optimize locally rather than system-wide
As a result, inefficient Kafka architectures can remain undetected until scale magnifies their impact.
Sustainability improves when less unnecessary work is designed into the system.
From a GreenOps standpoint, the corrective action is usually architectural:
Replace duplication with shared streams.
Replace static capacity with elastic scaling.
Replace default retention with intentional policies.
Replace recomputation with incremental processing.
Designing sustainable streaming architectures involves deliberate trade-offs. Reducing energy usage and resource waste can affect latency, isolation, and predictability if applied without context. GreenOps does not eliminate trade-offs. It makes them explicit and measurable.
The goal is not maximum efficiency at all costs but intentional balance.
Key Trade-Offs in Green Streaming Design
| Trade-off | Impact | Mitigation Strategy |
| --- | --- | --- |
| Cost vs latency | Aggressive consolidation or scaling down may increase tail latency. | Isolate truly latency-critical workloads; right-size others. |
| Retention vs analytical flexibility | Shorter retention limits ad hoc analysis. | Tier older data to cold storage instead of deleting. |
| Consolidation vs isolation | Shared clusters increase blast radius. | Use quotas, namespaces, and governance controls. |
| Autoscaling vs predictability | Dynamic scaling introduces variability. | Set scaling bounds and conservative thresholds. |
| Fewer partitions vs parallelism | Lower partitions may cap throughput. | Repartition incrementally as demand grows. |
| Early filtering vs future use cases | Discarded data can’t be recovered. | Preserve aggregates or sampled data instead of raw events. |
These trade-offs highlight why sustainability is a systems design problem, not a tuning exercise.
GreenOps optimizations typically move systems along three axes:
Efficiency – higher utilization, less waste
Reliability – consistent performance under load
Flexibility – ability to support future use cases
Pushing too far on any single axis creates risk elsewhere; the most sustainable real-time systems maintain a stable equilibrium across all three.
Efficiency-first decisions are usually appropriate when:
Workloads are predictable or steady
Data is rarely reprocessed
Systems are over-provisioned today
Energy or infrastructure constraints are explicit
Additional capacity and redundancy are justified when:
Regulatory requirements mandate strict isolation
Traffic patterns are highly bursty
Latency service-level objectives (SLOs) are extremely tight
Failure impact is severe
GreenOps doesn’t remove safety margins; it ensures that they’re intentional rather than accidental.
GreenOps complements most existing operational practices by focusing on how efficiently systems use resources rather than how much those resources cost or how teams are organized around them.
For streaming platforms, GreenOps often sits naturally between FinOps and platform engineering.
| Discipline | Primary Focus | Streaming Context |
| --- | --- | --- |
| FinOps | Financial visibility and accountability | Understand spend across clusters, teams, and environments |
| GreenOps | Resource efficiency and waste reduction | Improve utilization, reduce recompute, and minimize idle capacity |
| Platform Engineering | Enablement and standardization | Provide reusable, efficient streaming primitives |
This separation ensures that sustainable streaming architecture remains an engineering concern, not a budgeting exercise.
FinOps asks, “What does this streaming system cost?” GreenOps answers, “Is this streaming system doing unnecessary work?”
In practice, this looks like:
FinOps identifying where spend is increasing
GreenOps explaining why consumption is inefficient
Architectural fixes that reduce both energy usage and cost
This keeps GreenOps focused on efficiency patterns, not pricing or budget processes.
Often, platform teams are the primary enablers of GreenOps practices in streaming systems by:
Providing shared, right-sized clusters
Offering autoscaling consumer templates
Enforcing schema compatibility and stream reuse
Standardizing observability for utilization metrics
When efficiency is built into the platform, individual teams don’t need to solve it repeatedly.
GreenOps creates a continuous feedback loop across disciplines:
Platform engineering supplies efficient defaults.
Application teams build on shared streams.
GreenOps metrics highlight waste and inefficiency.
Architectural adjustments improve utilization.
FinOps reporting reflects lower spend as a result.
Efficiency improvements flow upstream into cost visibility without requiring additional governance layers.
FinOps manages what you spend. Platform engineering defines how teams build. GreenOps ensures that systems do less unnecessary work.
Together, they enable low-carbon, energy-efficient streaming systems that scale responsibly without adding operational complexity.
Start building a more efficient, scalable, and sustainable streaming architecture with elastic, autoscaling Kafka clusters on Confluent Cloud.
**Don’t always-on streaming systems use more energy than batch?**
Well-designed streaming systems process data incrementally and avoid repeated recomputation, which reduces peak capacity needs and total compute cycles compared to batch-heavy architectures.

**How much waste can GreenOps practices eliminate?**
Teams commonly see reductions in idle compute, storage growth, and reprocessing after consolidating pipelines, enabling autoscaling, and tightening retention. The exact impact depends on baseline utilization and workload variability.

**Does autoscaling put reliability at risk?**
It doesn’t when configured intentionally. Setting conservative scaling bounds and using lag-based signals allow systems to scale safely while avoiding unnecessary over-provisioning.

**Does GreenOps require new monitoring tools?**
No. Most GreenOps insights come from existing observability data—utilization, throughput, lag, and storage growth—viewed through an efficiency lens rather than a purely operational one.

**How can teams measure energy use without direct emissions data?**
Start by tracking proxy metrics such as utilization, storage footprint, and network I/O. These directly correlate with energy consumption and are more actionable than abstract emissions estimates.

**Is it too early to apply GreenOps to a new Kafka deployment?**
No. Early architectural decisions—partitioning, retention, stream reuse—have long-term efficiency consequences. Applying GreenOps early prevents waste from compounding as systems grow.
Apache®, Apache Kafka®, and Kafka® are registered trademarks of the Apache Software Foundation. No endorsement by the Apache Software Foundation is implied by the use of these marks.