Kafka

Monday June 24, 2019

What's New in Apache Kafka 2.3

It’s official: Apache Kafka® 2.3 has been released! Here is a selection of some of the most interesting and important features we added in the new release.

Core Kafka

KIP-351 and KIP-427: Improved monitoring for partitions which have lost replicas
In order to keep your data safe, Kafka creates several replicas of each partition on different brokers. For producers using acks=all, Kafka will not allow writes to proceed unless the partition has a minimum number of in-sync replicas. This is called the “minimum ISR,” and it is controlled by the min.insync.replicas configuration.
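For context, a minimal sketch of the relevant setting, which can be applied per topic or as a broker-wide default (values here are illustrative):

```properties
# With replication.factor=3 and this setting, producers using acks=all
# receive an error once fewer than 2 replicas are in sync.
min.insync.replicas=2
```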

Kafka already had metrics showing the partitions that had fewer than the minimum number of in-sync replicas. In this release, KIP-427 adds additional metrics showing partitions that have exactly the minimum number of in-sync replicas. By monitoring these metrics, users can see partitions that are on the verge of becoming under-replicated.

Additionally, KIP-351 adds the --under-min-isr-partitions command line flag to the kafka-topics command. This allows users to easily see which partitions have fewer than the minimum number of in-sync replicas.
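As a sketch, the new option combines with --describe like the existing --under-replicated-partitions option (the broker address here is a placeholder for your cluster):

```shell
# List only the partitions whose in-sync replica count has dropped
# below min.insync.replicas.
bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --under-min-isr-partitions
```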

KIP-354: Add a Maximum Log Compaction Lag
To a first-order approximation, previous values of a key in a compacted topic get compacted away some time after the latest value for that key is written: only the most recent value is retained. It has long been possible to set the minimum amount of time a value sticks around before becoming eligible for compaction, so old values are not lost too quickly. Now, with KIP-354, it is also possible to set the maximum amount of time an old value can stick around. The new parameter max.compaction.lag.ms specifies how long an old value may possibly survive in a compacted topic. This can be used to comply with data retention regulations such as the GDPR.
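As a sketch, the new setting can be applied to an existing compacted topic with the kafka-configs tool (the topic name and ZooKeeper address here are placeholders):

```shell
# Guarantee that superseded values in this compacted topic are cleaned
# up within 7 days (604800000 ms) of a newer value arriving.
bin/kafka-configs.sh --zookeeper localhost:2181 \
  --entity-type topics --entity-name purchases \
  --alter --add-config max.compaction.lag.ms=604800000
```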

KIP-402: Improve fairness in SocketServer processors
Previously, Kafka would prioritize opening new TCP connections over handling existing connections. This could cause problems if clients attempted to create many new connections within a short time period. KIP-402 prioritizes existing connections over new ones, which improves the broker’s resilience to connection storms. The KIP also adds a max.connections broker setting, which caps the total number of connections the broker will accept.
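A minimal sketch of the new broker configuration (values illustrative; the per-listener override assumes a listener named "internal" exists):

```properties
# Cap the total number of client connections this broker will accept.
max.connections=1000
# Optional per-listener override, e.g. to reserve headroom for
# inter-broker traffic on an internal listener.
listener.name.internal.max.connections=200
```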

KIP-461: Improve failure handling in the Replica Fetcher
In order to keep replicas up to date, each broker maintains a pool of replica fetcher threads. Each thread in the pool is responsible for fetching replicas for some number of follower partitions. Previously, if one of those partitions failed, the whole thread would fail with it, causing under-replication on potentially hundreds of partitions. With this KIP, if a single partition managed by a given replica fetcher thread fails, the thread continues handling the remainder of its partitions.

KAFKA-7283: Reduce the amount of time the broker spends scanning log files when starting up
When the broker starts up after an unclean shutdown, it checks the logs to make sure they have not been corrupted. This JIRA optimizes that process so that Kafka only checks log segments that haven't been explicitly flushed to disk. Now, the time required for log recovery is no longer proportional to the total number of log segments. Instead, it is proportional to the number of unflushed log segments. Some of the benchmarks which Zhanxiang Huang discusses on the JIRA show up to a 50% reduction in broker startup time.

Kafka Connect

KIP-415: Incremental Cooperative Rebalancing in Kafka Connect
In Kafka Connect, worker tasks are distributed among the available worker nodes. When a connector is reconfigured or a new connector is deployed, and when a worker is added or removed, the tasks must be rebalanced across the Connect cluster. This helps ensure that all of the worker nodes are doing a fair share of the Connect work. In 2.2 and earlier, a Connect rebalance caused all worker threads to pause while the rebalance proceeded. As of KIP-415, rebalancing is no longer a stop-the-world affair, making configuration changes far less disruptive.

KIP-449: Add connector contexts to Connect worker logs
A running Connect cluster contains several different thread pools. Each of these threads emits its own logging, as one might expect. However, this makes it difficult to untangle the sequence of events involved in a single logical operation, since the parts of that operation are running asynchronously in their various threads across the cluster. This KIP adds some context to each Connect log message, making it much easier to make sense of the state of a single connector over time.

Kafka Streams

KIP-258: Allow Users to Store Record Timestamps in RocksDB
Prior to this KIP, message timestamps were not stored in the Streams state store. Only the key and value were there. With this KIP, timestamps are now included in the state store. This KIP lays the groundwork to enable future features like handling out-of-order messages in KTables and implementing TTLs for KTables.

KIP-428: Add in-memory window store / KIP-445: Add in-memory Session Store
These KIPs add in-memory implementations for the Kafka Streams window store and session store. Previously, only the key-value store had an in-memory implementation. The in-memory implementations provide higher performance in exchange for a lack of persistence to disk. In many cases, this can be a very good tradeoff.
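As a sketch (stream and store names are illustrative, and kafka-streams must be on the classpath), a windowed aggregation can opt into the new in-memory window store via Stores.inMemoryWindowStore():

```java
import java.time.Duration;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowBytesStoreSupplier;

// Back a windowed count with the new in-memory window store instead of
// the default RocksDB-based store.
WindowBytesStoreSupplier inMemory = Stores.inMemoryWindowStore(
    "pageview-counts",          // store name
    Duration.ofHours(1),        // retention period
    Duration.ofMinutes(5),      // window size
    false);                     // do not retain duplicates

pageviews.groupByKey()
         .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
         .count(Materialized.as(inMemory));
```

Stores.inMemorySessionStore() works the same way for session-windowed aggregations.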

KIP-313: Add KStream.flatTransform and KStream.flatTransformValues
The first half of this KIP, the flatTransform() method, was delivered in Kafka 2.2. The flatTransform() method is very similar to flatMap(), in that it takes a single input record and produces one or more output records. flatMap() does this in a type-safe way but without access to the ProcessorContext and the state store. We’ve been able to use the Processor API to perform this same kind of operation with access to the ProcessorContext and the state store, but without the type safety of flatMap(). flatTransform() gave us the best of both worlds: processor API access, plus compile-time type checking.

flatTransformValues(), just introduced in the completed KIP-313 in Kafka 2.3, is to flatTransform() as flatMapValues() is to flatMap(). It lets us do processor-API-aware computations that return multiple records for each input record without changing the message key and causing a repartition.
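As a sketch of the new method (the Order and LineItem types and the store name are hypothetical), flatTransformValues() takes a ValueTransformer that can read state stores but never sees or changes the key:

```java
// Fan each order out into zero or more line items, with access to a
// state store, without touching the key -- so no repartition is needed.
KStream<String, LineItem> items = orders.flatTransformValues(
    () -> new ValueTransformer<Order, Iterable<LineItem>>() {
        private KeyValueStore<String, Long> seen;

        @Override
        @SuppressWarnings("unchecked")
        public void init(final ProcessorContext context) {
            // The store must be attached to the topology under this name.
            seen = (KeyValueStore<String, Long>) context.getStateStore("seen-store");
        }

        @Override
        public Iterable<LineItem> transform(final Order order) {
            // ... consult `seen` (e.g., to deduplicate), then fan out.
            return order.lineItems();
        }

        @Override
        public void close() {}
    },
    "seen-store");
```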

Conclusion

Thanks for reading! Check out the release notes for more details about all the great stuff in Kafka 2.3. Also, check out this YouTube video on the highlights of the release!
