Confluent Q1 ’22 improves elasticity for real-time business demands

Confluent announced the launch of Confluent Q1 ’22, which includes new additions to its portfolio of fully managed data streaming connectors, new controls for cost-effective scaling of massive-throughput Apache Kafka clusters, and new functionality to help maintain reliable data quality across global environments.

These innovations help enable a simple, scalable, and reliable flow of data across the enterprise, so any organization can deliver the real-time operations and customer experiences needed to succeed in a digitally driven world.

“Because of the way we now consume data, businesses and customers increasingly expect immediate and relevant experiences and communications,” said Amy Machado, research manager, Streaming Data Pipeline, IDC, in IDC’s Worldwide Continuous Analytics Software Forecast, 2021–2025. “Real-time data streams and processing will become the norm, not the exception.”

However, for many organizations, real-time data remains out of reach. Data lives in silos, trapped in different systems and applications, as integrations take months to build and significant resources to manage.

Additionally, scaling streaming capacity to meet ever-changing business needs is a complex process that can result in excessive infrastructure spending. Finally, ensuring data quality and compliance on a global scale is a complicated technical feat, usually requiring close coordination between teams of Kafka experts.

“The real-time operations and experiences that set organizations apart in today’s economy require ubiquitous data in motion,” said Ganesh Srinivasan, chief product officer, Confluent. “To help any organization put their data in motion, we’ve designed the easiest way to connect data flows between applications and critical business systems, ensuring they can scale quickly to meet immediate, ever-changing business needs and maintain confidence in the quality of their data globally.”

Introducing new additions to Confluent’s cloud-native data streaming platform

With these latest innovations now generally available, Confluent continues to deliver on its vision of a data streaming platform that is complete, cloud-native, and available everywhere.

Complete: New additions to 50+ expert-designed, fully managed connectors modernize applications with real-time data pipelines

Confluent’s newest connectors include Azure Synapse Analytics, Amazon DynamoDB, Databricks Delta Lake, Google BigTable, and Redis for expanded coverage of popular data sources and destinations.

“Running the largest online marketplace for independent creators requires data in motion to better serve our users,” said Joe Burns, CIO, TeePublic. “We are a small team responsible for making real-time user interaction data available in our data warehouse and data lake for immediate analysis and insights. Using the fully managed Amazon S3 sink, Elasticsearch sink, Salesforce CDC source, and Snowflake sink connectors, we were able to quickly and easily create high-performance streaming data pipelines that connect our business through Confluent Cloud without any operational overhead, accelerating our overall project timeline.”

Available only on Confluent Cloud, Confluent’s portfolio of 50+ fully managed connectors helps enterprises build powerful streaming applications and improve data portability. These connectors, designed with Confluent’s deep Kafka expertise, provide organizations with an easy way to modernize data warehouses, databases, and data lakes with real-time data pipelines:

  • Data warehouse connectors: Snowflake, Google BigQuery, Azure Synapse Analytics, Amazon Redshift
  • Database connectors: MongoDB Atlas, PostgreSQL, MySQL, Microsoft SQL Server, Azure Cosmos DB, Amazon DynamoDB, Oracle Database, Redis, Google BigTable
  • Data lake connectors: Amazon S3, Google Cloud Storage, Azure Blob Storage, Azure Data Lake Storage Gen 2, Databricks Delta Lake
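For teams that automate provisioning rather than click through the Confluent Cloud UI, connectors like these can also be created programmatically. The Python sketch below is illustrative only: it assumes the Confluent Cloud Connect REST API, and the environment ID, cluster ID, credentials, and connector config keys are hypothetical placeholders rather than a complete, validated configuration.

```python
# Illustrative sketch: provisioning a fully managed Snowflake sink connector
# via the Confluent Cloud Connect REST API. All IDs, credentials, and most
# config keys below are placeholders, not a verified production config.
import requests

CLOUD_API_KEY = "CLOUD_KEY"        # Confluent Cloud API key (placeholder)
CLOUD_API_SECRET = "CLOUD_SECRET"  # Confluent Cloud API secret (placeholder)
ENV_ID = "env-abc123"              # environment ID (placeholder)
CLUSTER_ID = "lkc-xyz789"          # Kafka cluster ID (placeholder)

connector = {
    "name": "snowflake-orders-sink",
    "config": {
        # Exact keys vary per managed connector; treat these as illustrative.
        "connector.class": "SnowflakeSink",
        "topics": "orders",
        "input.data.format": "JSON",
        "kafka.api.key": "KAFKA_KEY",        # placeholder
        "kafka.api.secret": "KAFKA_SECRET",  # placeholder
        "snowflake.url.name": "https://myaccount.snowflakecomputing.com",
        "tasks.max": "1",
    },
}

resp = requests.post(
    f"https://api.confluent.cloud/connect/v1/environments/{ENV_ID}"
    f"/clusters/{CLUSTER_ID}/connectors",
    json=connector,
    auth=(CLOUD_API_KEY, CLOUD_API_SECRET),
)
resp.raise_for_status()
print(resp.json())  # connector metadata on success
```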

To simplify real-time visibility into application and system health, Confluent announced integrations with Datadog and Prometheus. With just a few clicks, operators get deep, end-to-end visibility into Confluent Cloud within the monitoring tools they already use, providing an easier way to identify, resolve, and avoid issues while returning valuable time to focus on whatever their jobs require.
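Under the hood, the Prometheus integration scrapes Confluent Cloud’s Metrics API, which can serve metrics in the Prometheus exposition format. As a minimal sketch, assuming the Metrics API export endpoint and using a placeholder cluster ID and credentials:

```python
# Minimal sketch: pulling Confluent Cloud metrics in Prometheus exposition
# format from the Metrics API export endpoint. Prometheus (or Datadog's
# integration) would scrape this same endpoint on a schedule.
import requests

resp = requests.get(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/export",
    params={"resource.kafka.id": "lkc-xyz789"},  # placeholder cluster ID
    auth=("CLOUD_KEY", "CLOUD_SECRET"),          # placeholder API key/secret
)
resp.raise_for_status()
print(resp.text)  # plain-text metrics, one sample per line
```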

“Delivering a top-notch experience to landlords and housing professionals around the world requires a holistic view of our business,” said Mustapha Benosmane, Product Manager, ADEO. “Confluent’s integration with Datadog quickly syncs real-time data streams with our monitoring tool of choice, with no operational complexity or middleware required. Our teams now have visibility into the health of all our systems for reliable, always-on services.”

Cloud-native: New controls to expand and shrink GBps+ cluster capacity improve elasticity for dynamic, real-time business demands

To ensure that services always remain available, many enterprises are forced to over-provision the capacity of their Kafka clusters, paying a premium price for excess infrastructure that often goes unused. Confluent solves this common problem with dedicated clusters that can be provisioned on demand with just a few clicks and include self-service controls to add and remove capacity at GBps+ throughput scale.

Capacity is easy to adjust at any time through the Confluent Cloud UI, CLI, or API. With automatic data balancing, these clusters constantly optimize data placement to balance the load without additional effort. Additionally, minimum capacity guarantees prevent clusters from being scaled down below what is needed to support active traffic.
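As a rough illustration of the API route, the sketch below assumes Confluent Cloud’s cluster management API and shows a Dedicated cluster being expanded by updating its CKU count; the request shape, IDs, and credentials are hypothetical placeholders, not a verified call.

```python
# Hypothetical sketch: expanding a Dedicated cluster's capacity (in CKUs)
# through Confluent Cloud's cluster management API. The same adjustment is
# available in the Cloud UI and CLI. IDs and credentials are placeholders.
import requests

CLUSTER_ID = "lkc-xyz789"  # placeholder cluster ID
resp = requests.patch(
    f"https://api.confluent.cloud/cmk/v2/clusters/{CLUSTER_ID}",
    json={
        "spec": {
            "config": {"kind": "Dedicated", "cku": 4},  # target capacity
            "environment": {"id": "env-abc123"},        # placeholder env ID
        }
    },
    auth=("CLOUD_KEY", "CLOUD_SECRET"),  # placeholder credentials
)
resp.raise_for_status()
```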

“Ensuring that our e-commerce platform is always up and available is a complex, cross-functional, and costly effort, especially given the major fluctuations in online traffic we see throughout the year,” said Cem Küççük, Senior Manager, Product Engineering, Hepsiburada. “Our teams are challenged to deliver the exact capacity we need at all times without over-provisioning expensive infrastructure. With a self-service way to expand and shrink cloud-native Apache Kafka clusters, Confluent enables us to deliver a real-time experience to every customer with operational simplicity and cost effectiveness.”

Combined with Confluent’s new Load Metric API, organizations can make informed decisions about when to expand and shrink capacity, based on a real-time view of cluster utilization. With this new level of elastic scalability, enterprises can run high-throughput workloads with high availability, operational simplicity, and cost effectiveness.
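For illustration, a utilization check driving such a scaling decision might look like the following sketch, which assumes the Metrics API query endpoint and a cluster load metric; the metric name, time window, and IDs are placeholders rather than confirmed values.

```python
# Hedged sketch: querying cluster load from the Confluent Cloud Metrics API
# to inform expand/shrink decisions. Names and the window are illustrative.
import requests

query = {
    "aggregations": [
        {"metric": "io.confluent.kafka.server/cluster_load_percent"}
    ],
    "filter": {
        "field": "resource.kafka.id",
        "op": "EQ",
        "value": "lkc-xyz789",  # placeholder cluster ID
    },
    "granularity": "PT1M",
    "intervals": ["2022-01-01T00:00:00Z/PT1H"],  # illustrative window
}

resp = requests.post(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/query",
    json=query,
    auth=("CLOUD_KEY", "CLOUD_SECRET"),  # placeholder credentials
)
resp.raise_for_status()
for point in resp.json().get("data", []):
    print(point["timestamp"], point["value"])  # per-minute cluster load
```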

Everywhere: New Schema Linking ensures reliable and compatible data flows in cloud and hybrid environments worldwide

“As enterprises begin to adopt event streaming more widely, event data sharing is both more important and more commonplace,” according to Maureen Fleming, IDC program vice president, Intelligent Process Automation. “Features such as Schema Linking enable faster adoption, lower costs, and greater confidence in leveraging the data flowing through an enterprise.”

Global data quality checks are critical to maintaining a highly compatible Kafka deployment suitable for long-term standardized use across the organization. With Schema Linking, enterprises now have an easy way to maintain reliable data flows across cloud and hybrid environments with shared schemas that sync in real time.

Combined with Cluster Linking, schemas are shared wherever they are needed, providing an easy way to maintain high data integrity while deploying use cases including global data sharing, cluster migrations, and preparation for real-time failover in the event of disaster recovery.
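Mechanically, Schema Linking is driven by schema exporters that copy subjects from a source Schema Registry to a destination registry. The sketch below is a minimal illustration, assuming a Schema Registry exporter endpoint; the URLs, subject names, credentials, and config keys are hypothetical placeholders.

```python
# Illustrative sketch: creating a schema exporter so that schemas from a
# source Schema Registry sync to a destination registry (Schema Linking).
# All URLs, subjects, and credentials are placeholders.
import requests

SRC_SR_URL = "https://psrc-source.region.provider.confluent.cloud"  # placeholder

exporter = {
    "name": "orders-schema-exporter",
    "subjects": ["orders-value"],  # subjects to keep in sync
    "config": {
        # Destination Schema Registry and its credentials (placeholders)
        "schema.registry.url": "https://psrc-dest.region.provider.confluent.cloud",
        "basic.auth.credentials.source": "USER_INFO",
        "basic.auth.user.info": "DEST_SR_KEY:DEST_SR_SECRET",
    },
}

resp = requests.post(
    f"{SRC_SR_URL}/exporters",
    json=exporter,
    auth=("SRC_SR_KEY", "SRC_SR_SECRET"),  # source registry credentials
)
resp.raise_for_status()
print(resp.json())  # exporter name on success
```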
