Controls the amount of memory in bytes used for batching records awaiting transmission. Larger batches lead to higher throughput but might increase latency.
How long to wait for additional messages before sending a batch. Higher values increase throughput at the cost of latency.
Total memory the producer can use to buffer records waiting to be sent to the server.
Maximum number of unacknowledged requests the client will send on a single connection before blocking. Higher values improve throughput, but values greater than 1 can reorder messages when retries are enabled, since a failed request may be retried after a later one has already succeeded.
Compression algorithm applied to batches of records. Compression reduces network bandwidth usage at the cost of additional CPU on the producer (and on consumers when decompressing).
Controls durability guarantees. "all" waits for the full set of in-sync replicas to acknowledge the write (slowest but most durable), "1" waits only for the partition leader (faster), and "0" does not wait for any acknowledgment (fastest but least durable).
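The descriptions above correspond closely to the Apache Kafka producer settings batch.size, linger.ms, buffer.memory, max.in.flight.requests.per.connection, compression.type, and acks. Assuming that mapping is intended, a producer.properties sketch tying them together might look like this (the specific values are illustrative examples, not recommendations):

```properties
# Example producer.properties sketch (assumes the Apache Kafka producer;
# values are illustrative, tuned loosely toward throughput).

# Bytes buffered per partition batch before a send is triggered.
batch.size=32768
# Wait up to 10 ms for more records to fill a batch.
linger.ms=10
# Total memory (64 MB) for records awaiting transmission.
buffer.memory=67108864
# Unacknowledged requests allowed per connection; >1 risks reordering with retries.
max.in.flight.requests.per.connection=5
# One of: none, gzip, snappy, lz4, zstd.
compression.type=lz4
# One of: all, 1, 0 — durability versus latency trade-off.
acks=all
```

With acks=all and retries enabled, setting max.in.flight.requests.per.connection above 1 is only ordering-safe when idempotence is also enabled; otherwise lowering it to 1 preserves ordering at some throughput cost.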