Entries tagged [data]

Thursday March 21, 2019

The Apache Software Foundation Announces Apache® Unomi™ as a Top-Level Project

Powerful Open Source Customer Data Platform in use at Al-Monitor, Altola, Jahia, and Yupiik, among others. 

Wakefield, MA —21 March 2019— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today Apache® Unomi™ as a Top-Level Project (TLP).

Apache Unomi is a standards-based Customer Data Platform (CDP) that manages online customer, lead, and visitor information to deliver personalized experiences that adhere to visitor privacy rules such as the GDPR and “Do Not Track” preferences. The project was originally developed at Jahia and was submitted to the Apache Incubator in October 2015.

"I am truly thankful to our community, especially our mentors, who have helped us achieve this milestone," said Serge Huber, Vice President of Apache Unomi. "The original vision behind Unomi was to ensure true privacy by making the technologies handling customer data completely Open Source and independent. Since it was submitted to the Apache Incubator, developing Unomi using the Apache Way will ensure the project grows its community to be more diverse and welcome new users and developers."

Apache Unomi is versatile, and features privacy management, user/event/goal tracking, reporting, visitor profile management, segmentation, personas, A/B testing, and more. It can be used as:

  • a personalization service for a Web CMS;

  • an analytics service for native mobile applications;

  • a centralized profile management system with segmentation capabilities; and

  • a consent management hub (a brief API sketch follows this list).
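
To make the list above concrete, here is a minimal sketch, in Python, of how a client might report a visitor event to a Unomi-style profile service over its REST API. The host, port, endpoint path, and payload fields shown are illustrative assumptions; consult the Apache Unomi documentation for the actual context and profile APIs.

```python
# A minimal, hypothetical sketch of talking to a Unomi-style profile service
# over HTTP. Host, port, endpoint path, and payload fields are assumptions
# for illustration only; see the Apache Unomi documentation for the real API.
import requests

UNOMI_URL = "http://localhost:8181"   # assumed local Unomi instance

# Send a page-view event for an (assumed) visitor so the server can update
# the visitor's profile and evaluate segments/personalization rules.
context_request = {
    "sessionId": "example-session-1",
    "events": [
        {
            "eventType": "view",
            "scope": "example-site",
            "properties": {"page": "/pricing"},
        }
    ],
}

resp = requests.post(f"{UNOMI_URL}/context.json", json=context_request, timeout=10)
resp.raise_for_status()

# The response is expected to carry the (possibly newly created) profile id.
print("profile id returned by the server:", resp.json().get("profileId"))
```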

Apache Unomi is the industry's first reference implementation of the upcoming OASIS CDP specification (established by the OASIS CXS Technical Committee, which sets standards as a core technology for enabling the delivery of personalized user experiences). As a reference implementation, Apache Unomi serves as a real-world demonstration of the standard's stability, and it is quickly gaining traction among those interested in truly open and transparent customer data privacy. Apache Unomi is in use at organizations such as Al-Monitor, Altola, Jahia, Yupiik, and many others to create and deliver consistent personalized experiences across channels, markets, and systems.

"When Serge and I announced the launch of the Apache Unomi project at the 2015 ApacheCon Budapest, Apache Unomi, at that time, was the first proposal among the rising Customer Data Platform industry's segment, positioned as an 'ethical data-driven marketing' product that would respect the privacy of customers while leveraging the power of unified customers data," said Elie Auvray, Head of Business Development at Jahia. "Jahia's digital experience management solutions are based on Apache Unomi, and we can't wait to see how the project will now evolve with its growing community. Seeing today Apache Unomi becoming a Top-Level Project is a great reward for us as Open Source software believers. We are proud of this milestone, grateful to the Apache Software Foundation and our mentors, and we know it's only the beginning of a new –hopefully long and successful– journey."

"Under development at OASIS, the Customer Data Platform specification –for which Apache Unomi aims to be the reference implementation– lies at the crossroads of many solutions providers needs such as WCM, CRM, Big Data Platforms, Machine Learning, IoT and Digital Marketing," said Laurent Liscia, CEO of OASIS. "At a time when client data interoperability and built-in data privacy are mandatory foundations for legal, consistent, and personalized experiences across channel markets and systems, the CDP specification, together with Apache Unomi, is a clear and welcome answer to end-user concerns."

"Apache Unomi is the perfect solution to implement a user profile platform," said Jean-Baptiste Onofré, Fellow at Talend. "It fully addresses the user trust and privacy needs, allowing to easily create user profile and Web marketing features. As Unomi is powered by Apache Karaf, it's also a great platform for several use cases, such as digital marketing in Web applications, managing user profiles on IoT devices, and more."

"Apache Unomi enables Al-Monitor readers to be driven towards additional personalized content that corresponds, via content tags profiling and related automated segmentations, to what they have already accessed," said Valerie Voci, Head of Digital Strategy and Marketing at Al-Monitor. "This data follows our customers where they go, so it's a consistent experience whether they are getting these recommendations in their inbox or on the Website or both. And if a change takes place on one, that change is immediately reflected on the other. It helps us create a very cohesive marketing message and a great overall digital experience."

"As we were developing a progressive web app (PWA) for a client, we were looking for a Customer Data Platform (CDP) to store customer insights, such as behavioral and explicit customer data," said Lars Petersen, Co-Founder at Altola. "Privacy was table stake for us, along with the flexibility to customize data schema and open API. We selected Apache Unomi based on these parameters, we had it up and running on AWS in less than 30 min. and are very impressed with the maturity of the platform, its privacy by design and how easy it was to work with."

"In a digital world, customer data is very important to offer a better experience to users. However, data privacy and trust is not an option for users," said François Papon, CTO at Yupiik. "Apache Unomi is the best solution for our clients because it's an Open Source project managed by an independent foundation, there is no vendor lock-in. It's also based on other solutions like Apache Karaf that made it ready for modularity, scalability, cloud, devops, and more." 

"Apache Unomi is poised to disrupt the Customer Data Platform market," said Thomas Sigdestad, CTO at Enonic, and co-chair, with Serge Huber, of the CDP standards work at OASIS open. "The CDP marketplace is lacking from a standard way of exchanging data, and the vendor space is over-represented by closed source and proprietary cloud offerings. This effectively limits the potential and adoption of CDP in general. Apache Unomi is not merely Open Source, but also the reference implementation of the imminent CDP standard from OASIS. Companies using Unomi will benefit from faster and simpler integrations without locking their customer data into yet another proprietary silo." 

"Graduating as an Apache Top-Level Project is only the beginning," added Huber. "Unomi has a lot of potential that it still to be developed, and is a perfect opportunity for those interested in Customer Data Privacy to participate through our mailing lists and Slack channel, and to learn more about the project on our Website and presentations."

Catch Apache Unomi in action at ApacheCon North America (9-12 September 2019 in Las Vegas, Nevada), and ApacheCon Europe (22-24 October 2019 in Berlin, Germany) http://apachecon.com/ .

Availability and Oversight
Apache Unomi software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Unomi, visit http://unomi.apache.org/

About the Apache Incubator
The Apache Incubator is the entry path for projects and codebases wishing to become part of the efforts at The Apache Software Foundation. All code donations from external organizations and existing external projects seeking to join the ASF enter through the Incubator to: 1) ensure all donations are in accordance with the ASF legal standards; and 2) develop new communities that adhere to our guiding principles. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. For more information, visit http://incubator.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 730 individual Members and 7,000 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Aetna, Alibaba Cloud Computing, Anonymous, ARM, Baidu, Bloomberg, Budget Direct, Capital One, Cerner, Cloudera, Comcast, Facebook, Google, Handshake, Hortonworks, Huawei, IBM, Indeed, Inspur, Leaseweb, Microsoft, ODPi, Pineapple Fund, Pivotal, Private Internet Access, Red Hat, Target, Tencent, Union Investment, Workday, and Verizon Media. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Unomi", "Apache Unomi", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

Tuesday February 19, 2019

The Apache® Software Foundation Announces Apache Arrow™ Momentum

Open Source Big Data in-memory columnar layer adopted by dozens of Open Source and commercial technologies; exceeded 1,000,000 monthly downloads within first three years as an Apache Top-Level Project

Wakefield, MA —19 February 2019— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, today announced momentum with Apache® Arrow™, the Open Source Big Data in-memory columnar layer.

Since the founding of the project in January 2016, Apache Arrow has quickly become the de facto standard for representing and processing analytical data in memory, accelerating analytical processing and interchange by more than 100x.

"When we became a Top-Level Project, we projected that the majority of the world's data will be processed through Arrow within the next decade," said Jacques Nadeau, Vice President of Apache Arrow. "In just three years time, we are proud to see Arrow's substantial industry adoption and increased value across a wide range of analytical, machine learning, and artificial intelligence workloads."

Highlights of Apache Arrow's success include:

Industry Adoption —more than 20 major technologies have adopted Arrow to accelerate in-memory analytics, including Apache Spark, NVIDIA RAPIDS, pandas, and Dremio, among others. A list of known Open Source and commercial implementations can be found at https://arrow.apache.org/powered_by/

Millions of Downloads —leveraging and integrating Apache Arrow into many other technologies has bolstered downloads to more than 1,000,000 each month.

New Language Support —as a cross-language development platform, supporting multiple programming languages is paramount. Apache Arrow has grown from supporting one language to eleven today, including C++, Java, Python, R, C#, JavaScript, and Ruby, among others.

Seamless Data Format Support —Arrow supports different data types, both simple and nested, located in arbitrary memory such as regular system RAM, memory-mapped files or on-GPU memory. In addition, it can ingest data from popular storage formats such as Apache Parquet, CSV files, Apache ORC, JSON, and more.
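
As a rough illustration of that format flexibility, the following sketch (assuming the pyarrow package is installed and that "events.parquet" and "events.csv" are local sample files you supply) loads the same kind of in-memory Arrow table from two different storage formats.

```python
# Populate Arrow tables from different on-disk formats, assuming pyarrow is
# installed and the file names below point at real local files.
import pyarrow.parquet as pq
import pyarrow.csv as pcsv

table_from_parquet = pq.read_table("events.parquet")   # columnar Parquet -> Arrow table
table_from_csv = pcsv.read_csv("events.csv")           # CSV -> Arrow table

print(table_from_parquet.schema)   # the columnar schema Arrow inferred/preserved
print(table_from_csv.num_rows)     # row count of the ingested CSV
```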

Major Code Donations —Apache Arrow's new features and expanded functionality are due in part to code and component donations that include:
  • C# Library
  • Gandiva LLVM-based Expression Compiler
  • Go Library
  • JavaScript Library
  • Plasma Shared Memory Object Store
  • Ruby Libraries (Apache Arrow and Apache Parquet)
  • Rust Libraries (Parquet and DataFusion Query Engine)

Community and Contributor Growth —over the past 12 months, nearly 300 individuals have submitted more than 3,000 contributions that have grown the Apache Arrow code base by 300,000 lines of code. The Arrow community is welcoming approximately 10 new contributors each month.


In January the project announced its most recent release, Apache Arrow 0.12.0, which reflects more than 600 enhancements developed during Q4 2018. The Apache Arrow community is actively working on a number of impactful new initiatives that include solving high performance analytical problems and allowing for more efficient data distribution across entire clusters.

"Apache Arrow's rapid industry adoption and developer community growth supports our original thesis of the importance of a language-independent open standard for columnar data," said Wes McKinney, member of the Apache Arrow Project Management Committee, and creator of Python's pandas project. "Additionally, we are seeing productive collaborations take place not only between programming languages but also between the database systems and data science worlds. We look forward to welcoming more data system developers into our community."

About Apache Arrow
Apache Arrow is a cross-language development platform for in-memory data. It specifies a standardized language-independent columnar memory format for flat and hierarchical data, organized for efficient analytic operations on modern hardware. It also provides computational libraries and zero-copy streaming messaging and interprocess communication. Languages currently supported include C, C++, C#, Go, Java, JavaScript, MATLAB, Python, R, Ruby, and Rust.
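
A minimal sketch of the columnar model and zero-copy interchange described above, assuming a recent pyarrow release (and pandas for the final conversion); the column names and values are illustrative only.

```python
# Build a columnar table in memory, serialize it with Arrow's streaming IPC
# format, and read it back. In practice the reading side could be a different
# process or a different language runtime, consuming the same buffers without
# re-encoding rows. Assumes a recent pyarrow (and pandas for to_pandas()).
import pyarrow as pa

table = pa.table({
    "user_id": pa.array([1, 2, 3], type=pa.int64()),
    "score":   pa.array([0.5, 0.9, 0.1], type=pa.float64()),
})

# Write the table to an in-memory buffer using the streaming IPC format.
sink = pa.BufferOutputStream()
with pa.ipc.new_stream(sink, table.schema) as writer:
    writer.write_table(table)
buf = sink.getvalue()

# Read it back from the buffer and hand it to pandas.
reader = pa.ipc.open_stream(buf)
roundtrip = reader.read_all()
print(roundtrip.to_pandas())
```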

Availability and Oversight
Apache Arrow software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Arrow, visit http://arrow.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 730 individual Members and 7,000 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official global conference series. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Aetna, Alibaba Cloud Computing, Anonymous, ARM, Baidu, Bloomberg, Budget Direct, Capital One, Cerner, Cloudera, Comcast, Facebook, Google, Handshake, Hortonworks, Huawei, IBM, Indeed, Inspur, LeaseWeb, Microsoft, Oath, ODPi, Pineapple Fund, Pivotal, Private Internet Access, Red Hat, Target, Tencent, Union Investment, and Workday. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Arrow", "Apache Arrow", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Wednesday January 23, 2019

The Apache Software Foundation Announces Apache® Hadoop® v3.2.0

Pioneering Open Source distributed enterprise framework powers US$166B Big Data ecosystem

Wakefield, MA —23 January 2019— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, today announced Apache® Hadoop® v3.2.0, the latest version of the Open Source software framework for reliable, scalable, distributed computing.

Now in its 11th year, Apache Hadoop is the foundation of the US$166B Big Data ecosystem (source: IDC) by enabling data applications to run and be managed on large hardware clusters in a distributed computing environment. "Apache Hadoop has been at the center of this big data transformation, providing an ecosystem with tools for businesses to store and process data on a scale that was unheard of several years ago," according to Accenture Technology Labs.

"This latest release unlocks the powerful feature set the Apache Hadoop community has been working on for more than nine months," said Vinod Kumar Vavilapalli, Vice President of Apache Hadoop. "It further diversifies the platform by building on the cloud connector enhancements from Apache Hadoop 3.0.0 and opening it up for deep learning use-cases and long-running apps."

Apache Hadoop 3.2.0 highlights include:
  • ABFS Filesystem connector —supports the latest Azure Data Lake Storage Gen2;
  • Enhanced S3A connector —including better resilience to throttled AWS S3 and DynamoDB IO;
  • Node Attributes Support in YARN —allows tagging nodes with multiple attribute labels and supports placing containers based on expressions over those labels;
  • Storage Policy Satisfier —enables HDFS (Hadoop Distributed File System) applications to move blocks between storage types according to the storage policies set on files/directories (a brief operational sketch follows this list);
  • Hadoop Submarine —enables data engineers to easily develop, train, and deploy deep learning models (in TensorFlow) on the very same Hadoop YARN cluster;
  • C++ HDFS client —enables asynchronous IO to HDFS, which benefits downstream projects such as Apache ORC; and
  • Upgrades for long running services —supports in-place, seamless upgrades of long-running containers via the YARN Native Service API (application program interface) and CLI (command-line interface).
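
As a rough operational sketch of the Storage Policy Satisfier item above, the following Python snippet shells out to the hdfs CLI to set a storage policy and then request that already-written blocks be moved to match it. The subcommand names reflect the Hadoop documentation as recalled here and should be verified against your Hadoop release.

```python
# Hypothetical operator workflow: set an HDFS storage policy, then ask the
# Storage Policy Satisfier to migrate existing blocks. The "hdfs
# storagepolicies" subcommands are as recalled from the Hadoop docs; verify
# them against your version before relying on this.
import subprocess

path = "/data/archive"   # assumed HDFS directory

# Mark the directory as COLD so its blocks should live on ARCHIVE storage.
subprocess.run(
    ["hdfs", "storagepolicies", "-setStoragePolicy", "-path", path, "-policy", "COLD"],
    check=True,
)

# Ask the Storage Policy Satisfier to move blocks that were written earlier.
subprocess.run(
    ["hdfs", "storagepolicies", "-satisfyStoragePolicy", "-path", path],
    check=True,
)
```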

"This is one of the biggest releases in Apache Hadoop 3.x line which brings many new features and over 1,000 changes," said Sunil Govindan, Apache Hadoop 3.2.0 release manager. "We are pleased to announce that Apache Hadoop 3.2.0 is available to take your data management requirements to the next level. Thanks to all our contributors who helped to make this release happen."

Apache Hadoop is widely deployed at numerous enterprises and institutions worldwide, such as Adobe, Alibaba, Amazon Web Services, AOL, Apple, Capital One, Cloudera, Cornell University, eBay, ESA Calvalus satellite mission, Facebook, foursquare, Google, Hortonworks, HP, Huawei, Hulu, IBM, Intel, LinkedIn, Microsoft, Netflix, The New York Times, Rackspace, Rakuten, SAP, Tencent, Teradata, Tesla Motors, Twitter, Uber, and Yahoo. The project maintains a list of educational and production users, as well as companies that offer Hadoop-related services at https://wiki.apache.org/hadoop/PoweredBy

Global Knowledge hails, "...the open-source Apache Hadoop platform changes the economics and dynamics of large-scale data analytics due to its scalability, cost effectiveness, flexibility, and built-in fault tolerance. It makes possible the massive parallel computing that today's data analysis requires."

Hadoop is proven at scale: Netflix captures 500+B daily events using Apache Hadoop. Twitter uses Apache Hadoop to handle 5B+ sessions a day in real time. Twitter’s 10,000+ node cluster processes and analyzes more than a zettabyte of raw data through 200B+ tweets per year. Facebook’s cluster of 4,000+ machines that store 300+ petabytes is augmented by 4 new petabytes of data generated each day. Microsoft uses Apache Hadoop YARN to run the internal Cosmos data lake, which operates over hundreds of thousands of nodes and manages billions of containers per day.

Transparency Market Research recently reported that the global Hadoop market is anticipated to rise at a staggering 29% CAGR with a market valuation of US$37.7B by the end of 2023.

Apache Hadoop remains one of the most active projects at the ASF: it ranks #1 for Apache project repositories by code commits, and is the #5 repository by size (3,881,797 lines of code).

"The Apache Hadoop community continues to go from strength to strength in further driving innovation in Big Data," added Vavilapalli. "We hope that developers, operators and users leverage our latest release in fulfilling their data management needs."

Catch Apache Hadoop in action at the Strata conference, 25-28 March 2019 in San Francisco, and dozens of Hadoop MeetUps held around the world, including on 30 January 2019 at LinkedIn in Sunnyvale, California.

Availability and Oversight
Apache Hadoop software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Hadoop, visit http://hadoop.apache.org/ and https://twitter.com/hadoop

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 730 individual Members and 7,000 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official global conference series. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Aetna, Alibaba Cloud Computing, Anonymous, ARM, Baidu, Bloomberg, Budget Direct, Capital One, Cerner, Cloudera, Comcast, Facebook, Google, Handshake, Hortonworks, Huawei, IBM, Indeed, Inspur, LeaseWeb, Microsoft, Oath, ODPi, Pineapple Fund, Pivotal, Private Internet Access, Red Hat, Target, Tencent, and Union Investment. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Hadoop", "Apache Hadoop", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Tuesday January 08, 2019

The Apache Software Foundation Announces Apache® Airflow™ as a Top-Level Project

Open Source Big Data workflow management system in use at Adobe, Airbnb, Etsy, Google, ING, Lyft, PayPal, Reddit, Square, Twitter, and United Airlines, among others.

Wakefield, MA —8 January 2019— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today Apache® Airflow™ as a Top-Level Project (TLP).

Apache Airflow is a flexible, scalable workflow automation and scheduling system for authoring and managing Big Data processing pipelines of hundreds of petabytes. Graduation from the Apache Incubator as a Top-Level Project signifies that the Apache Airflow community and products have been well-governed under the ASF's meritocratic process and principles.

"Since its inception, Apache Airflow has quickly become the de-facto standard for workflow orchestration," said Bolke de Bruin, Vice President of Apache Airflow. "Airflow has gained adoption among developers and data scientists alike thanks to its focus on configuration-as-code. That has gained us a community during incubation at the ASF that not only uses Apache Airflow but also contributes back. This reflects Airflow’s ease of use, scalability, and power of our diverse community; that it is embraced by enterprises and start-ups alike, allows us to now graduate to a Top-Level Project."

Apache Airflow is used to easily orchestrate complex computational workflows. Through smart scheduling, database and dependency management, error handling, and logging, Airflow automates resource management, from single servers to large-scale clusters. Written in Python, the project is highly extensible and able to run tasks written in other languages, allowing integration with commonly used architectures and projects such as AWS S3, Docker, Apache Hadoop HDFS, Apache Hive, Kubernetes, MySQL, Postgres, Apache Zeppelin, and more. Airflow originated at Airbnb in 2014 and was submitted to the Apache Incubator in March 2016.
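
The configuration-as-code style looks roughly like the following minimal DAG, written against the Airflow 1.10-era module layout current at the time of this announcement; the DAG id, task ids, and commands are placeholders.

```python
# A minimal example Airflow DAG: three placeholder tasks scheduled daily,
# with dependencies declared in code. Imports follow the Airflow 1.10-era
# module layout; names and commands are illustrative only.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    "owner": "data-eng",
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="example_daily_pipeline",
    default_args=default_args,
    start_date=datetime(2019, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extracting")
    transform = BashOperator(task_id="transform", bash_command="echo transforming")
    load = BashOperator(task_id="load", bash_command="echo loading")

    # Dependencies: extract runs first, then transform, then load.
    extract >> transform >> load
```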

Apache Airflow is in use at more than 200 organizations, including Adobe, Airbnb, Astronomer, Etsy, Google, ING, Lyft, NYC City Planning, Paypal, Polidea, Qubole, Quizlet, Reddit, Reply, Solita, Square, Twitter, and United Airlines, among others. A list of known users can be found at https://github.com/apache/incubator-airflow#who-uses-apache-airflow

"Adobe Experience Platform is built on cloud infrastructure leveraging open source technologies such as Apache Spark, Kafka, Hadoop, Storm, and more," said Hitesh Shah, Principal Architect of Adobe Experience Platform. "Apache Airflow is a great new addition to the ecosystem of orchestration engines for Big Data processing pipelines. We have been leveraging Airflow for various use cases in Adobe Experience Cloud and will soon be looking to share the results of our experiments of running Airflow on Kubernetes." 

"Our clients just love Apache Airflow. Airflow has been a part of all our Data pipelines created in past 2 years acting as the ring-master and taming our Machine Learning and ETL Pipelines," said Kaxil Naik, Data Engineer at Data Reply. "It has helped us create a Single View for our client's entire data ecosystem. Airflow's Data-aware scheduling and error-handling helped automate entire report generation process reliably without any human-intervention. It easily integrates with Google Cloud (and other major cloud providers) as well and allows non-technical personnel to use it without a steep learning curve because of Airflow’s configuration-as-a-code paradigm."

"With over 250 PB of data under management, PayPal relies on workflow schedulers such as Apache Airflow to manage its data movement needs reliably," said Sid Anand, Chief Data Engineer at PayPal. "Additionally, Airflow is used for a range of system orchestration needs across many of our distributed systems: needs include self-healing, autoscaling, and reliable [re-]provisioning."

"Since our offering of Apache Airflow as a service in Sept 2016, a lot of big and small enterprises have successfully shifted all of their workflow needs to Airflow," said Sumit Maheshwari, Engineering Manager at Qubole. "At Qubole, not only are we a provider, but also a big consumer of Airflow as well. For example, our whole Insight and Recommendations platform is built around Airflow only, where we process billions of events every month from hundreds of enterprises and generate insights for them on big data solutions like Apache Hadoop, Apache Spark, and Presto. We are very impressed by the simplicity of Airflow and ease at which it can be integrated with other solutions like clouds, monitoring systems or various data sources."

"At ING, we use Apache Airflow to orchestrate our core processes, transforming billions of records from across the globe each day," said Rob Keevil, Data Analytics Platform Lead at ING WB Advanced Analytics. "Its feature set, Open Source heritage and extensibility make it well suited to coordinate the wide variety of batch processes we operate, including ETL workflows, model training, integration scripting, data integrity testing, and alerting. We have played an active role in Airflow development from the onset, having submitted hundreds of pull requests to ensure that the community benefits from the Airflow improvements created at ING.  We are delighted to see Airflow graduate from the Apache Incubator, and look forward to see where this exciting project will be taken in future!"

"We saw immediately the value of Apache Airflow as an orchestrator when we started contributing and using it," said Jarek Potiuk, Principal Software Engineer at Polidea. "Being able to develop and maintain the whole workflow by engineers is usually a challenge when you have a huge configuration to maintain. Airflow allows your DevOps to have a lot of fun and still use the standard coding tools to evolve your infrastructure. This is 'infrastructure as a code' at its best."

"Workflow orchestration is essential to the (big) data era that we live in," added de Bruin. "The field is evolving quite fast and the new data thinking is just starting to make an impact. Apache Airflow is a child of the data era and therefore very well positioned, and is also young so a lot of development can still happen. Airflow can use bright minds from scientific computing, enterprises, and start-ups to further improve it. Join the community, it is easy to hop on!"

Availability and Oversight
Apache Airflow software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Airflow, visit http://airflow.apache.org/ and https://twitter.com/ApacheAirflow

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 730 individual Members and 7,000 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Aetna, Alibaba Cloud Computing, Anonymous, ARM, Baidu, Bloomberg, Budget Direct, Capital One, Cerner, Cloudera, Comcast, Facebook, Google, Handshake, Hortonworks, Huawei, IBM, Indeed, Inspur, LeaseWeb, Microsoft, Oath, ODPi, Pineapple Fund, Pivotal, Private Internet Access, Red Hat, Target, Tencent, and Union Investment. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Airflow", "Apache Airflow", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Wednesday December 12, 2018

The Apache Software Foundation Announces Apache® Griffin™ as a Top-Level Project

Open Source Big Data quality solution in use at eBay, Expedia, Huawei, JD.com, Meituan, PayPal, Pingan Bank, PPDAI, VIP.com, VMWare, and more.

Wakefield, MA —12 December 2018— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today Apache® Griffin™ as a Top-Level Project (TLP).

Apache Griffin is a robust Open Source Big Data quality solution for distributed data systems at any scale. It provides a unified process to measure data quality from different perspectives, as well as to build and validate trusted data assets in both streaming and batch contexts. Griffin originated at eBay and entered the Apache Incubator in December 2016.

"We are very proud of Griffin reaching this important milestone," said William Guo, Vice President of Apache Griffin. "By actively improving Big Data quality, Griffin helps build trusted data assets, therefore boosting your confidence in your business." 

Apache Griffin enables data scientists/analysts to handle data quality issues by:
  • Defining –specifying data quality requirements such as accuracy, completeness, timeliness, profiling, etc.;

  • Measuring –data quality measurements are applied, per the user-defined requirements, to source data ingested into the Griffin computing cluster; and

  • Applying Metrics –data quality reports are exported as metrics to a designated destination.

In addition, Griffin allows users to easily onboard new requirements into the platform and write comprehensive logic to further define their data quality. 
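
The Define/Measure/Apply flow above is expressed in Griffin through its own declarative measure definitions; purely as a rough, hypothetical illustration of the underlying logic (not Griffin's DSL), an "accuracy" measure can be sketched in PySpark as the fraction of source records that find a match in the target.

```python
# Not Griffin's own DSL: a rough PySpark sketch of the kind of "accuracy"
# measure Griffin computes, i.e. the fraction of source records that have a
# matching record in the target table. Table paths and join keys are
# illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("accuracy-sketch").getOrCreate()

source = spark.read.parquet("/data/source_orders")   # assumed source table
target = spark.read.parquet("/data/target_orders")   # assumed target table

matched = source.join(target, on=["order_id", "amount"], how="inner").count()
total = source.count()

accuracy = matched / total if total else 1.0
print(f"accuracy metric: {accuracy:.4f}")
```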

Apache Griffin is in use in high volume, high demand environments at 163.com/Netease, eBay, Expedia, Huawei, JD.com, Meituan, PayPal, Pingan Bank, PPDAI, VIP.com, and VMWare, among others.

"eBay contributed Griffin to the Apache Incubator in December 2016 to ensure its future development in a community-driven manner. It started with the idea on how eBay could address the data quality issue across multiple systems, especially in streaming context," said Vivian Tian, VP of eBay, GM - China Center of Excellence. "Griffin brings data quality solution to data ecosystem and ensure data applications have a solid quality foundation. We are extremely happy to see Griffin graduate as an Apache Top Level Project, and look forward to continued innovation and collaboration with the Apache community."

"We have been using Apache Griffin for about two years, monitoring 1000+ tables with data quality metrics, and are very happy to see it graduate to a Top-Level Project," said Chao Zhu, Senior Director at VIPshop Finance. "Apache Griffin and its data quality DSL can help us easily identify data quality issues instantly on our big data platform. In addition, Apache Griffin's architecture is highly extensible. We are looking forward to using it in real time data quality management system. We also look forward to contribute some of our minor enhancement to Griffin back to the community."

"We appreciate the Griffin project which really helps so much in our daily data jobs.After years of struggling with the complexity of data quality issues, we turned to Apache Griffin for a new platform that would simplify our data quality pipeline," said Jianfeng Liu, Director of Real-time Data Department at PPDAI. "Because of Apache Griffin's unified model for both batch and stream processing, we've been able to replace legacy systems with one solution that works seamlessly in our production environment. Griffin DSLs have allowed us to dramatically simplify our pipeline and to reduce our efforts a lot. I'm very proud and excited to see that the project is graduating."

"Apache Griffin is one of the best data quality solutions which my team has been used so far. It has been an exciting journey seeing the Griffin community evolve rapidly. And many people iteratively adopting it and contributing to newer capabilities," said Austin Sun, Senior Engineering Manager, Enterprise Service Platform at PayPal. "In PayPal risk domain, we benefit a lot from Apache Griffin to provide high quality data to make precise decision and protect our customer. In addition to PayPal risk, I knew there are several other corporates also leverages core capability from Griffin as their data quality solution. It’s my great honor to witness Griffin grows to a top level project. Way to go, Griffin."

"Apache Griffin project is yet another showcase how community over code can work for projects coming out from internal usages of companies into the open source," said Henry Saputra, ASF member and Incubator Mentor for Apache Griffin. "I am proud to be the part of the projects and mentors for the project when it was being contributed from eBay, in addition to several other projects already donated to ASF such as Apache Kylin and Eagle. The team has worked tremendously hard to adapt the Apache Way, and also shown great respect for the open source community in all the processes for design, development, and release processes.As a Top-Level Project I believe the PMC will help lead the project to much more success in the future."

"Graduation is not the end, it is the beginning of another journey. We hope to take Apache Griffin to the next level with a wider set of features and users," added Guo. "We welcome anyone to join our efforts by helping with product design, documentation, code, technical discussions or promoting Apache Griffin in The Apache Way."

Availability and Oversight
Apache Griffin software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Griffin, visit http://griffin.apache.org/ and https://twitter.com/apachegriffin

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 730 individual Members and 6,800 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Aetna, Alibaba Cloud Computing, Anonymous, ARM, Baidu, Bloomberg, Budget Direct, Capital One, Cerner, Cloudera, Comcast, Facebook, Google, Handshake, Hortonworks, Huawei, IBM, Indeed, Inspur, LeaseWeb, Microsoft, Oath, ODPi, Pineapple Fund, Pivotal, Private Internet Access, Red Hat, Target, Tencent, and Union Investment. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Griffin", "Apache Griffin", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Thursday August 23, 2018

The Apache Software Foundation Announces Apache® HAWQ® as a Top-Level Project

Advanced Big Data query engine and analytic database in use at Alibaba, Haier, VMWare, ZTESoft, and hundreds more.

Wakefield, MA —23 August 2018— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today Apache® HAWQ® as a Top-Level Project (TLP).

Apache HAWQ is an advanced enterprise SQL-on-Hadoop query engine and analytic database. It combines the key technological advantages of an MPP database with the scalability and convenience of Apache Hadoop. HAWQ reads data from and writes data to HDFS natively, delivers industry-leading performance and linear scalability, and provides users with a complete, standards-compliant SQL interface.
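
Because HAWQ descends from the Greenplum/PostgreSQL code line and exposes a standards-compliant SQL interface, it can typically be queried through PostgreSQL-compatible drivers. A hypothetical sketch using psycopg2 follows; the host, credentials, table, and roll-up query are placeholders.

```python
# Hypothetical sketch of issuing an ANSI SQL query (with a roll-up, as
# supported by HAWQ's SQL dialect) against a HAWQ cluster through a
# PostgreSQL-compatible driver. Connection details and table names are
# placeholders, not real endpoints.
import psycopg2

conn = psycopg2.connect(
    host="hawq-master.example.com", port=5432,
    dbname="analytics", user="gpadmin", password="secret",
)
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM sales
        GROUP BY ROLLUP (region)
        ORDER BY region
    """)
    for row in cur.fetchall():
        print(row)
conn.close()
```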

"We are very excited to see Apache HAWQ graduate as a Top-Level Project and we would like to thank our Incubation mentors for all their help," said Dr. Lei Chang, Vice President of Apache HAWQ. "This is a huge milestone that reflects the collective contributions from the growing global community to deliver a world-class SQL engine for analytics."

HAWQ operates natively in Apache Hadoop to provide users the tools to confidently and successfully interact with petabyte-range data sets. Features include:
  • Exceptional performance: parallel processing architecture delivers high performance throughput and low latency —potentially near real time— query responses that can scale to petabyte-sized datasets;
  • Robust ANSI SQL compliance: leverage familiar skills. Achieve higher levels of compatibility for SQL-based applications and BI/data visualization tools. Execute complex queries and joins, including roll-ups and nested queries; and 
  • Apache Hadoop ecosystem integration: integrate and manage with Apache YARN. Provision with Apache Ambari. Interface with Apache HCatalog. Supports Apache Parquet, Apache HBase, and others. Easily scales nodes up or down to meet performance or capacity requirements.

Apache HAWQ is in use at Alibaba, Haier, VMware, ZTESoft, and hundreds of users around the world.

"We admire Apache HAWQ's flexible framework and ability to scale up in a Cloud ecosystem. HAWQ helps those seeking a heterogeneous computing system to handle ad-hoc queries and heavy batch workloads," said Kuien Liu, Computing Platform Architect at Alibaba. "Alibaba encourages more and more engineers to continue to embrace Open Source, and Apache HAWQ stands out as a star project. We are proud to have been collaborating with this community since 2015."

"Haier Group has deployed clusters of more than 30 nodes in the production environment from the very beginning of HAWQ," said Xiaoliang Wu, Big Data Architect at Haier. "We use HAWQ as an ad-hoc query and batch computation engine in areas such as social network services and IOT. Because of its superior scalability and stability, HAWQ greatly improves development efficiency and reduces operation and maintenance costs. We believe that Apache HAWQ is a very competitive product in the SQL-On-Hadoop field."

"We have been using Apache HAWQ at VMware for 4 years now," said Dominie Jacob, Lead Big Data Engineer at VMware Inc. "It is easy to manage and scale using Apache Ambari, and easy to provision and attach more nodes based on demand. Being virtualized, it is easy to provision and attach more nodes based on demand. In our BI Big Data world, HAWQ is the primary database for accessing the Hadoop datasets, building models, and executing predictive model workflows. HAWQ is working seamlessly with billions of records, thousands of Tables/Functions/Tableau-Reports, and hundreds of users. The demand for HAWQ is increasing. As VMware always encourages us to pick up and contribute back to Open Source technologies, we would love to collaborate with this community and see more enhancements. In our BI space, HAWQ is one of the top priorities."

"Apache HAWQ is an attractive technology for Big Data applications," said Zixu Zhao, Architect at ZTESoft. "HAWQ serves as the foundation of our Big Data platform and it has been used in a lot of applications, such as interactive analytics and BI on telecom data. We congratulate HAWQ on becoming an Apache Top-Level Project."

"Becoming an Apache Top-Level Project is an important milestone," added Chang. "There is much work ahead of us, and we look forward to growing the HAWQ community and codebase."

Catch Apache HAWQ in action at ApacheCon North America 24-27 September 2018 http://apachecon.com/acna18/ .

Availability and Oversight

Apache HAWQ software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache HAWQ, visit http://hawq.apache.org/ and https://twitter.com/ApacheHAWQ .

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 730 individual Members and 6,800 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Aetna, Anonymous, ARM, Bloomberg, Budget Direct, Capital One, Cerner, Cloudera, Comcast, Facebook, Google, Hortonworks, Huawei, IBM, Indeed, Inspur, LeaseWeb, Microsoft, Oath, ODPi, Pineapple Fund, Pivotal, Private Internet Access, Red Hat, Target, and Union Investment. For more information, visit http://apache.org/ and https://twitter.com/TheASF


© The Apache Software Foundation. "Apache", "HAWQ", "Apache HAWQ", "Ambari", "Apache Ambari", "Hadoop", "Apache Hadoop", "HBase", "Apache HBase", "HCatalog", "Apache HCatalog", "Parquet", "Apache Parquet", "YARN", "Apache YARN", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Wednesday January 10, 2018

The Apache Software Foundation Announces Apache® Trafodion™ as a Top-Level Project

Mature Big Data database management system for working in SQL at Apache Hadoop-scale levels, in use at China Mobile, China Unicom, Dell EMC, Esgyn Corporation, and Millersoft Limited, among others.

Forest Hill, MD —10 January 2018— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today that Apache® Trafodion™ has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles.

Apache Trafodion extends Apache Hadoop to guarantee transactional integrity and operational workloads for new kinds of Big Data applications that run on Hadoop.

 "We are very excited to have been established as an Apache Top-Level Project," said Pierre Smits, Vice President of Apache Trafodion. "Graduation is a terrific milestone that culminates 2.5 years of contributions from around the globe to establishing a growing community committed to delivering a high-grade OLTP solution on top of the Apache Hadoop ecosystem."

Building on the scalability, elasticity, and flexibility of Hadoop, Trafodion (meaning "transactions" in Welsh) is the first integrated Open Source solution that delivers on the promise of integrated transactional and analytical systems (OLTP/OLAP) for Apache Hadoop. Trafodion's features include:
  • Fully functional ANSI SQL support, leveraging existing SQL skills;
  • Distributed ACID data protection, guaranteeing data consistency across multiple tables and rows (see the transaction sketch after this list);
  • Compile-Time and Run-Time Optimizers, delivering performance improvements for OLTP workloads;
  • Parallel-aware Query Optimizer, supporting large data sets;
  • Apache Spark integration, supporting streaming analysis;
  • Interoperability with existing Apache Hadoop tools and solutions, such as Hive, Ambari, Flume, Kafka, and Oozie; and 
  • Apache Hadoop and Linux distribution neutrality.
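
As a hypothetical illustration of the distributed ACID guarantee listed above, the following Python sketch runs a two-statement transfer as a single transaction over an ODBC connection (Trafodion provides ODBC/JDBC drivers); the DSN, table, and column names are placeholders.

```python
# Hypothetical multi-statement ACID transaction against Trafodion via ODBC.
# The DSN "trafodion_example" and the accounts table are placeholders; the
# point is only that both updates commit or roll back together.
import pyodbc

conn = pyodbc.connect("DSN=trafodion_example", autocommit=False)
cur = conn.cursor()
try:
    cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = ?", 1)
    cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = ?", 2)
    conn.commit()       # both changes become visible atomically
except Exception:
    conn.rollback()     # neither change is applied on failure
    raise
finally:
    conn.close()
```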

Trafodion originated at HP-IT in 2013, and was donated to the Apache Incubator in May 2015. The project has had four official releases since entering the Apache Incubator. 

Apache Trafodion is in use at China Mobile, China Unicom, Dell EMC, Esgyn Corporation, and Millersoft Limited, among others.

"As a member of the HP Core Team responsible for releasing Trafodion to The Apache Software Foundation, and responsible for the project’s name, I'm thrilled to see the Trafodion community be recognized with this major achievement. Congratulations to all who made it possible," said Ken Holt, COO at Esgyn Corporation. "Trafodion is the heart of EsgynDB, and the community is like its lifeblood — we at Esgyn are committed to continue to grow and support the community."

"Congratulations to the Trafodion community for becoming an Apache Top-Level Project," said Tianduo Gao, Senior Development Engineer of Software Technology (Suzhou) at China Mobile. "We are planning to use Trafodion to expand the business of China Mobile's Big Data platform: our data statistics of 4G real-time business in the country and provinces are more efficient than ever before."

"Becoming a core Apache Project is a major step forward for Trafodion. It will give Millersoft the confidence to introduce the technology to our Big Data clients," said Calum Miller, Director of Millersoft Limited. "Testing of our Open Source Data Vault engine running on top of Apache Trafodion is going well and we look forward to announcing a fully integrated product shortly."

"Apache Trafodion enhanced the operational efficiency of our Big Data platforms, and brought us better customer experience and broader application scenarios," said Charles Yu, Managing Director, Application Services at Dell EMC.

"Congratulations to Trafodion for officially becoming part of the Apache open source ecosystem," said Qingquan Gu, Senior Development Engineer of Internet of Things Marketing Service Center at China Unicom. "Using Trafodion provided China Unicom with the ability to build and integrate Big Data platforms, enhanced our operational efficiency, and brought us better customer experience."

"Becoming an Apache Top-Level Project is only the beginning," added Smits. "We are looking forward to growing the Trafodion community, reaching new adopters and contributors, and fostering a strong ecosystem around the project."

Availability and Oversight
Apache Trafodion software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Trafodion, visit http://trafodion.apache.org/ and https://twitter.com/Trafodion

About the Apache Incubator
The Apache Incubator is the entry path for projects and codebases wishing to become part of the efforts at The Apache Software Foundation. All code donations from external organizations and existing external projects wishing to join the ASF enter through the Incubator to: 1) ensure all donations are in accordance with the ASF legal standards; and 2) develop new communities that adhere to our guiding principles. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. For more information, visit http://incubator.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 680 individual Members and 6,300 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Capital One, Cash Store, Cerner, Cloudera, Comcast, Facebook, Google, Hewlett Packard, Hortonworks, Huawei, IBM, Inspur, iSIGMA, ODPi, LeaseWeb, Microsoft, PhoenixNAP, Pivotal, Private Internet Access, Red Hat, Serenata Flowers, Target, Union Investment, WANdisco, and Yahoo. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Trafodion", "Apache Trafodion", "Hadoop", "Apache Hadoop", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Thursday December 14, 2017

The Apache Software Foundation Announces Apache® Hadoop® v3.0.0 General Availability

Ubiquitous Open Source enterprise framework maintains decade-long leading role in $100B annual Big Data market

Forest Hill, MD —14 December 2017— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, today announced Apache® Hadoop® v3.0.0, the latest version of the Open Source software framework for reliable, scalable, distributed computing.

Over the past decade, Apache Hadoop has become ubiquitous within the greater Big Data ecosystem by enabling firms to run and manage data applications on large hardware clusters in a distributed computing environment.

"This latest release unlocks several years of development from the Apache community," said Chris Douglas, Vice President of Apache Hadoop. "The platform continues to evolve with hardware trends and to accommodate new workloads beyond batch analytics, particularly real-time queries and long-running services. At the same time, our Open Source contributors have adapted Apache Hadoop to a wide range of deployment environments, including the Cloud."

"Hadoop 3 is a major milestone for the project, and our biggest release ever," said Andrew Wang, Apache Hadoop 3 release manager. "It represents the combined efforts of hundreds of contributors over the five years since Hadoop 2. I'm looking forward to how our users will benefit from new features in the release that improve the efficiency, scalability, and reliability of the platform."

Apache Hadoop 3.0.0 highlights include:
  • HDFS erasure coding —halves the storage cost of HDFS while also improving data durability (see the sketch after this list);
  • YARN Timeline Service v.2 (preview) —improves the scalability, reliability, and usability of the Timeline Service;
  • YARN resource types —enables scheduling of additional resources, such as disks and GPUs, for better integration with machine learning and container workloads;
  • Federation of YARN and HDFS subclusters transparently scales Hadoop to tens of thousands of machines;
  • Opportunistic container execution improves resource utilization and increases task throughput for short-lived containers. In addition to its traditional, central scheduler, YARN also supports distributed scheduling of opportunistic containers; and 
  • Improved capabilities and performance improvements for cloud storage systems such as Amazon S3 (S3Guard), Microsoft Azure Data Lake, and Aliyun Object Storage System.
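
As a rough sketch of the erasure coding item above, the snippet below shells out to the hdfs CLI to list the available erasure coding policies and apply one to a directory, so new files written there are stored erasure-coded rather than with 3x replication. The subcommands and the RS-6-3-1024k policy name follow the Hadoop 3 documentation as recalled here; verify them against your release.

```python
# Hypothetical operator workflow for HDFS erasure coding in Hadoop 3:
# list the built-in policies, then apply one to a directory. Commands are as
# recalled from the Hadoop docs; confirm before use on a real cluster.
import subprocess

path = "/data/cold"   # assumed HDFS directory for infrequently read data

subprocess.run(["hdfs", "ec", "-listPolicies"], check=True)
subprocess.run(
    ["hdfs", "ec", "-setPolicy", "-path", path, "-policy", "RS-6-3-1024k"],
    check=True,
)
```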

Hadoop 3.0.0 has already undergone extensive testing and integration with the broader Open Source ecosystem at The Apache Software Foundation. With this release, the community of developers and users promotes this release series out of beta.

Apache Hadoop is widely deployed at numerous enterprises and institutions worldwide, such as Adobe, Alibaba, Amazon Web Services, AOL, Apple, Capital One, Cloudera, Cornell University, eBay, ESA Calvalus satellite mission, Facebook, foursquare, Google, Hortonworks, HP, Hulu, IBM, Intel, LinkedIn, Microsoft, Netflix, The New York Times, Rackspace, Rakuten, SAP, Tencent, Teradata, Tesla Motors, Twitter, Uber, and Yahoo. The project maintains a list of known users at https://wiki.apache.org/hadoop/PoweredBy

"It's tremendous to see this significant progress, from the raw tool of eleven years ago, to the mature software in today's release," said Doug Cutting, original co-creator of Apache Hadoop. "With this milestone, Hadoop better meets the requirements of its growing role in enterprise data systems.  The Open Source community continues to respond to industrial demands."

Apache Hadoop's diverse community enjoys continued growth amongst the ASF's most active projects, and remains at the forefront of more than three dozen Apache Big Data projects.

[Chart: Apache Hadoop committer history]

Apache Hadoop has received countless awards, including top prizes at the Media Guardian Innovation Awards and Duke's Choice Awards, and has been hailed by industry analysts:

"...the lifeblood of organizational analytics…" —Gartner

"Hadoop Is Here To Stay" —Forrester

"...today Hadoop is the only cost-sensible and scalable open source alternative to commercially available Big Data management packages. It also becomes an integral part of almost any commercially available Big Data solution and de-facto industry standard for business intelligence (BI)." —MarketAnalysis.com/Market Research Media

"...commanding half of big data’s $100 billion annual market value...Hadoop is the go-to big data framework." —BigDataWeek.com

"Hadoop, and its associated tools, is currently the 'big beast' of the big data world and the Hadoop environment is undergoing rapid development..." —Bloor Research


"The opportunity to effect meaningful, even fundamental change in the Apache Hadoop project remains open," added Douglas. "Our new contributors uprooted the project from its historical strength in Web-scale analytics by introducing powerful, proven abstractions for data management, security, containerization, and isolation. Apache Hadoop drives innovation in Big Data by growing its community. We hope this latest release continues to draw developers, operators, and users to the ASF."

Catch Apache Hadoop in action at the Strata Data Conference in San Jose, CA, 5-8 March 2018, and at dozens of Hadoop Meetups held around the world.

Availability and Oversight
Apache Hadoop software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Hadoop, visit http://hadoop.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server —the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 680 individual Members and 6,300 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Capital One, Cash Store, Cerner, Cloudera, Comcast, Facebook, Google, Hortonworks, Huawei, IBM, Inspur, iSIGMA, ODPi, LeaseWeb, Microsoft, PhoenixNAP, Pivotal, Private Internet Access, Red Hat, Serenata Flowers, Target, Union Investment, WANdisco, and Yahoo. For more information, visit http://www.apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Hadoop", "Apache Hadoop", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Tuesday November 28, 2017

The Apache Software Foundation Announces Apache® Impala™ as a Top-Level Project

High-performance analytic database for Apache Hadoop, in-Cloud or on-premises, in use at Caterpillar, Cox Automotive, Jobrapido, Marketing Associates, the New York Stock Exchange, phData, and Quest Diagnostics, among others.

Forest Hill, MD —28 November 2017— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today that Apache® Impala™ has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles.

Apache Impala is a modern, high-performance analytic database for Apache Hadoop. The massively parallel processing (MPP) SQL query engine allows for analytical queries on data stored on-premises (in HDFS or Apache Kudu) or in Cloud object storage via SQL or business intelligence tools without having to migrate data sets into specialized systems or proprietary formats.

"The Impala project has grown a lot since we entered incubation in December 2015," said Jim Apple, Vice President of Apache Impala. "With the help of our mentors and the Incubator, we have grown as a community and adopted the Apache Way, all while the Impala contributors have helped make Impala more stable and performant."

In addition to using the same unified storage platform as other Hadoop components, Impala also uses the same metadata, SQL syntax (Apache Hive SQL), ODBC driver, and user interface (Impala query UI in Hue) as Hive. This provides a familiar and unified platform for real-time or batch-oriented queries. Impala provides:
  • A familiar SQL interface that data scientists and analysts already know;
  • The ability to query high volumes of data (Big Data) in Apache Hadoop;
  • Distributed queries in a cluster environment, for convenient scaling and to make use of cost-effective commodity hardware;
  • The ability to share data files between different components with no copy or export/import step; for example, to write with Apache Pig, transform with Hive and query with Impala. Impala can read from and write to Hive tables, enabling simple data interchange using Impala for analytics on Hive-produced data; and
  • A single system for big data processing and analytics, so customers can avoid costly modeling and ETL just for analytics.
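
Since Impala exposes that SQL interface over standard JDBC and ODBC connections, applications can query it like any other database. The sketch below is a minimal, hypothetical Java example assuming the Hive JDBC driver (which Impala accepts) and an Impala daemon on its default HiveServer2-compatible port 21050; the host, table, and query are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ImpalaQueryExample {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string for an unsecured Impala daemon.
            String url = "jdbc:hive2://impala-host:21050/default;auth=noSasl";

            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT region, COUNT(*) AS orders FROM sales GROUP BY region")) {
                while (rs.next()) {
                    System.out.println(rs.getString("region") + ": " + rs.getLong("orders"));
                }
            }
        }
    }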

Impala was inspired by Google's F1 database, which also separates query processing from storage management. It was originally released in 2012 and entered the Apache Incubator in December 2015. The project has had four releases during its incubation process.

"In 2011, we started development of Impala in order to make state-of-the-art SQL analytics available to the user community as open-source technology," said Marcel Kornacker, original founder of the Impala project. "The graduation to an Apache Top-Level Project is a recognition of the exceptional developer community that stands behind this project."

Apache Impala is deployed across a number of industries such as financial services, healthcare, and telecommunications, and is in use at companies that include Caterpillar, Cox Automotive, Jobrapido, Marketing Associates, the New York Stock Exchange, phData, and Quest Diagnostics. In addition, Impala is shipped by Cloudera, MapR, and Oracle.

"Apache Impala is our interactive SQL tool of choice. Over 30 phData customers have it deployed to production," said Brock Noland, Chief Architect at phData. "Combined with Apache Kudu for real-time storage, Impala has made architecting IoT and Data Warehousing use-cases dead simple. We can deploy more production use-cases with fewer people, delivering increased value to our customers. We're excited to see Impala graduate to a top-level project and look forward to contributing to its success."

"We use Apache Impala to boost performance of our SQL queries against our data lake," said Matteo Coloberti, Head of Analytics at Jobrapido. "Impala is an incredible service that gives us impressive performance on queries."

"We used to distribute Microsoft Excel reports to clients every one or two days but now they can search on their own by customer, sales deal, or even service type," said Andy Frey, CTO of Marketing Associates. "Apache Impala is used to query millions of rows to identify specific records that match the clients' criteria. We've even given clients a 'Query Hadoop' option that allows them to create simple SQL statements and query Hadoop directly via Impala. We're able to offer a faster, richer, and more accurate selection of services without the labor or latency concerns that we used to have."

"The Apache Impala community is growing, and we welcome new contributors to join in our efforts in our code, documentation, issue tracker, and discussion forums," added Apple.

Catch Apache Impala in action at Not Another Big Data Conference, taking place 12 December 2017 in Palo Alto, CA.

Availability and Oversight
Apache Impala software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Impala, visit http://impala.apache.org/ and https://twitter.com/ApacheImpala

About the Apache Incubator
The Apache Incubator is the entry path for projects and codebases wishing to become part of the efforts at The Apache Software Foundation. All code donations from external organizations and existing external projects wishing to join the ASF enter through the Incubator to: 1) ensure all donations are in accordance with the ASF legal standards; and 2) develop new communities that adhere to our guiding principles. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. For more information, visit http://incubator.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 680 individual Members and 6,300 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Capital One, Cash Store, Cerner, Cloudera, Comcast, Facebook, Google, Hewlett Packard, Hortonworks, Huawei, IBM, Inspur, iSIGMA, ODPi, LeaseWeb, Microsoft, PhoenixNAP, Pivotal, Private Internet Access, Red Hat, Serenata Flowers, Target, WANdisco, and Yahoo. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Impala", "Apache Impala", "Hadoop", "Apache Hadoop", "Hive", "Apache Hive", "Kudu", "Apache Kudu", "Pig", "Apache Pig", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Tuesday October 03, 2017

Response From The Apache® Software Foundation To Questions From US House Committee On Energy And Commerce Regarding Equifax Data Breach

On 19 September 2017 The Apache® Software Foundation ("ASF") http://apache.org/ was contacted by the US House Committee on Energy and Commerce to answer questions in preparation for their hearing on 3 October regarding the Equifax data breach.

The official response from the ASF follows.

= = =

RESPONSES TO QUESTIONS FROM

US HOUSE COMMITTEE ON ENERGY AND COMMERCE

BACKGROUND:

We think that it is important to provide background about The Apache Software Foundation ("ASF") and its projects as the ASF is very different from conventional for-profit software companies.

The ASF:

 - interacts with the users of its software and provides patches in a different manner than such conventional for-profit software companies;
 - is a not-for-profit foundation qualified under Section 501(c)(3) of the IRS regulations;
 - develops, shepherds, and incubates hundreds of Open Source software projects that are run solely by volunteers, with some Foundation-level operations and services (such as infrastructure, administration, and marketing) provided by paid staff;
 - provides all of its Open Source software free of charge to the public at-large;
 - is financially supported by donations from corporations and  individuals; 
 - is vendor neutral: participation is limited to individuals, irrespective of affiliation or employment status.

Code for Apache projects is written by more than 6,000 volunteer individuals and employees of corporations across six continents and contributed to the ASF at no cost. The ASF maintains records of contributors solely through its list of "contributor license agreements". All individuals who are granted write access to the Apache repositories must submit an Individual Contributor License Agreement (ICLA). Corporations that have assigned employees to work on Apache projects as part of an employment agreement may sign a Corporate CLA (CCLA) for contributing intellectual property via the corporation. The ASF has confirmed that it has not received a CCLA from Equifax, nor has it received code contributions by Equifax employees (although the ASF cannot determine whether an individual contributor is affiliated with Equifax).

Each Apache software project is managed by a Project Management Committee ("PMC"), a self-selected team of active contributors to the project. A PMC guides the project's day-to-day operations, including community development and product releases. The PMC oversees the software development for the projects, including any patches to those projects, which are available to anyone for download from the apache.org website and numerous global mirror sites. Releases of code for Apache are managed by the PMC, who distinguish between project software releases and patches published to our issue trackers. New releases that include patches are created, voted on by the PMC, and made available for download. The ASF then alerts the community to the patches. Unlike conventional for-profit software companies, the ASF does not provide the patches directly to the users of its software projects.

The ASF does not provide conventional for-profit maintenance contracts or support the way a conventional for-profit software company would because Apache is a charitable organization composed of volunteers. The ASF provides its projects the facility to maintain numerous mailing lists to share with their developer and user communities project-related news and updates, technical discussions, troubleshooting, recommendations, and assistance in an open forum. Some conventional for-profit software companies package software produced by Apache in order to provide more comprehensive support or provide consulting support services.

RESPONSES TO QUESTIONS FROM US HOUSE COMMITTEE ON ENERGY AND COMMERCE:

1) When did the ASF learn of the vulnerability that became CVE-2017-5638?

On 14 February 2017, the Apache Struts PMC first received report of the vulnerability which became CVE-2017-5638. The ASF does not have direct information about whether the CVE-2017-5638  vulnerability caused the Equifax hack.

2) How did the ASF learn of it?

The Apache Struts PMC received a report via its security mailing list from Nike Zheng about the vulnerability. 

3) When did the ASF make a patch available for CVE-2017-5638?

ASF provided a patch for the CVE-2017-5638 bug on 7 March 2017, the same day on which it was reported on its blog. On 7 March 2017, the Apache Struts PMC officially posted an announcement about the vulnerability, along with two Struts releases that fixed it:

http://struts.apache.org/announce.html#a20170307
http://struts.apache.org/announce.html#a20170307-2

4) Did the Foundation provide guidance on how the patch/update should be installed (my understanding is that it was a bit more complicated than a traditional patch)?

The patch was released as part of a full release of the Apache Struts project, which means users had to upgrade to the latest version; upgrading is the simplest way of implementing the patch. The Apache Struts PMC also provided other options, including information about using a different implementation of the Multipart parser, filtering out suspicious requests, and other ways to implement the patch: http://struts.apache.org/docs/s2-045.html. In addition, on 20 March 2017 the Apache Struts PMC released two custom plug-ins to resolve the vulnerability without upgrading to the latest version:
http://struts.apache.org/announce.html#a20170320

5) The ASF's software is all open-source, as we understand it:

Yes: all ASF software projects are provided under the Apache Software License, version 2, an  Open Source Software (OSS) license.

For large organizations like Equifax that rely on Apache’s OSS, do they:

i.      Provide financial assistance, such as donations, to help pay for maintenance of the codebase?

While financial assistance is not required for using ASF software projects, some corporations choose to provide financial assistance through donations. However, the number of companies that provide donations is a very small percentage of the total corporate users of ASF projects.

Donations to ASF go to a general fund and are not targeted for the development, maintenance, or influence of particular projects.

ii.     Provide "volunteers" who help craft/review/patch code?

Some corporations ask that employees contribute to certain projects, but, as noted above, the number of companies that have their employees contribute to ASF projects is a very small percentage of the users of ASF projects.

iii.    Provide other assistance to help maintain the availability and/or quality of the OSS?

Some corporations provide products, sales, and support services for Apache projects. These organizations have no direct relationship with the ASF. As noted above, the number of companies that have their employees contribute to ASF projects is a very small percentage of the corporate users of ASF projects.

# # #

Monday September 25, 2017

The Apache Software Foundation Announces Apache® RocketMQ™ as a Top-Level Project

Open Source distributed messaging and streaming Big Data platform in use at Alibaba Group, Didi Chuxing, S.F. Express, WeBank, Peking University, and Chinese Academy of Sciences, among others.

Forest Hill, MD –25 September 2017– The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today that Apache® RocketMQ™ has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles.

Apache RocketMQ is an Open Source distributed messaging and streaming Big Data platform with low latency, high performance and reliability, trillion-level capacity and flexible scalability.

"I am very excited to see Apache RocketMQ as a Top-Level Project and I would like to thank our mentors for all their help, the Apache Incubator Project Management Committee for its advice and guidance, everyone in the RocketMQ community, and Alibaba for publishing the research upon which RocketMQ is based," said Xiaorui Wang, Vice President of Apache RocketMQ. "During the incubation process, the RocketMQ community worked very hard to develop high-quality distributed software for messaging and streaming, in an open and inclusive manner in accordance with the Apache Way."

RocketMQ originated at Alibaba in 2012, and, after handling 1.2 trillion concurrent online message transmissions in the Alibaba Nov. 11th Global Shopping Festival, was donated to the Apache Incubator in November 2016. Apache RocketMQ v4.0.0 was released in February 2017.

As a distributed messaging engine, RocketMQ features include:
  • Low latency: more than 99.6% of responses within 1 millisecond under high load;
  • Finance-oriented: high availability with tracking and auditing features;
  • Industry-sustainable: trillion-level message capacity guaranteed;
  • Vendor-neutral: supports multiple messaging protocols, including JMS and OpenMessaging;
  • Big Data friendly: batch transfer with versatile integration for flooding throughput; and
  • Massive accumulation: accumulates messages without performance loss, given sufficient disk space.
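
To show the messaging model in practice, below is a minimal, hypothetical producer sketch using the RocketMQ Java client; the producer group, name server address, topic, tag, and payload are placeholders.

    import org.apache.rocketmq.client.producer.DefaultMQProducer;
    import org.apache.rocketmq.client.producer.SendResult;
    import org.apache.rocketmq.common.message.Message;

    public class OrderEventProducer {
        public static void main(String[] args) throws Exception {
            // A producer group identifies a set of producers sending the same kind of message.
            DefaultMQProducer producer = new DefaultMQProducer("order-producer-group");
            producer.setNamesrvAddr("nameserver-host:9876"); // placeholder name server address
            producer.start();

            // Topic, tag, and body are illustrative only.
            Message msg = new Message("OrderTopic", "created",
                    "order-12345 created".getBytes("UTF-8"));
            SendResult result = producer.send(msg);
            System.out.println("Sent " + result.getMsgId()
                    + " with status " + result.getSendStatus());

            producer.shutdown();
        }
    }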

"RocketMQ was conceived from the outset as an open-source distributed messaging and streaming platform with low latency, high performance and reliability, trillion-level capacity and flexible scalability," said Von Gosling, original co-creator of RocketMQ and Chief Architect of Aliware MQ at Alibaba Group. "It has been great to witness the growth of the RocketMQ community and codebase as an ASF incubating project, and I look forward to this continuing as a Top-Level Project. Today, more than 100 companies are using Apache RocketMQ, with more feedback coming from the community. According to our data, more than 80% of the project's contributions are from outside the donator Alibaba Group."

In addition to Alibaba Group, Apache RocketMQ is in use at hundreds of companies and research/educational institutions that include Didi Chuxing, S.F. Express, WeBank, Peking University, and Chinese Academy of Sciences, among others.

"Graduation from the Incubator marks an important milestone for the RocketMQ project," said Bruce Snyder, Apache RocketMQ Incubator Mentor and Director of Software Development at SAP Hybris. "This is recognition of the focus and hard work of the project members to learn The Apache Way and drive community around RocketMQ. I am honored to have helped guide the project to a successful graduation."

"At Didi, we have used Apache RocketMQ as storage engine to build MessageQueue service. Based on high availability and high performance of RocketMQ we provide high-quality service," said Neil Qi, Architect at Didi Chuxing. "I believe RocketMQ will become the best MessageQueue project in future."

"New participants are more than welcome to join the project. To serve the community better, we created and maintain two repositories: one is our kernel version and the other is for community contributions. The community has contributed integration projects with some other Apache TLPs such as Apache Storm, Apache Ignite, Apache Spark, and Apache Flume," said Xinyu "yukon" Zhou, member of the Apache RocketMQ Project Management Committee. "We enthusiastically look forward to working together with all contributors to Apache RocketMQ in order to advance the state-of-the-art distributed messaging engine."

Availability and Oversight
Apache RocketMQ software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache RocketMQ, visit http://rocketmq.apache.org/ and https://twitter.com/ApacheRocketMQ

About the Apache Incubator
The Apache Incubator is the entry path for projects and codebases wishing to become part of the efforts at The Apache Software Foundation. All code donations from external organizations and existing external projects wishing to join the ASF enter through the Incubator to: 1) ensure all donations are in accordance with the ASF legal standards; and 2) develop new communities that adhere to our guiding principles. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. For more information, visit http://incubator.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 650 individual Members and 6,200 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Capital One, Cash Store, Cerner, Cloudera, Comcast, Facebook, Google, Hortonworks, HP, Huawei, IBM, Inspur, iSigma, LeaseWeb, Microsoft, ODPi, PhoenixNAP, Pivotal, Private Internet Access, Red Hat, Serenata Flowers, Target, WANdisco, and Yahoo. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "RocketMQ", "Apache RocketMQ", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Thursday September 14, 2017

MEDIA ALERT: The Apache Software Foundation Confirms Equifax Data Breach Due to Failure to Install Patches Provided for Apache® Struts™ Exploit

Who: Apache® Struts™ is a popular Open Source framework for creating enterprise-grade Java Web applications. Apache Struts powers front- and back-end applications and Internet of Things (IoT) devices for many of the world's most visible financial institutions, government organizations, technology service providers, telecommunications agencies, and Fortune 100 companies.

Apache Struts is an Apache Software Foundation Top-Level Project (since 2004) and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases.

What: On 7 September 2017, credit reporting agency Equifax announced a data breach affecting 143 million consumers. https://investor.equifax.com/news-and-events/news/2017/09-07-2017-213000628

Following this announcement, additional claims stated that the breach was caused by CVE-2017-9805, an exploit in Apache Struts that was disclosed on 4 September 2017. https://qz.com/1073221/the-hackers-who-broke-into-equifax-exploited-a-nine-year-old-security-flaw/

On 9 September 2017, the Apache Struts PMC issued a statement on the Equifax data breach that included details on its response process to reported vulnerabilities and also provided recommended security guidelines. https://s.apache.org/8thB

On 13 September 2017, Equifax issued a statement confirming that "The vulnerability was Apache Struts CVE-2017-5638". https://www.equifaxsecurity2017.com/

This vulnerability was patched on 7 March 2017, the same day it was announced. https://cwiki.apache.org/confluence/display/WW/S2-045

In conclusion, the Equifax data compromise was due to Equifax's failure to install, in a timely manner, the security updates that had been provided.

When: Apache Struts CVE-2017-5638 was originally reported on 7 March 2017.

Where: For downloads, documentation (including security guide and bulletins), and how to become involved with Apache Struts, visit http://struts.apache.org/ and https://twitter.com/TheApacheStruts

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 650 individual Members and 6,200 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Capital One, Cash Store, Cerner, Cloudera, Comcast, Facebook, Google, Hortonworks, HP, Huawei, IBM, Inspur, iSigma, LeaseWeb, Microsoft, ODPi, PhoenixNAP, Pivotal, Private Internet Access, Red Hat, Serenata Flowers, Target, WANdisco, and Yahoo. For more information, visit http://apache.org/ and https://twitter.com/TheASF

Media contact:
Sally Khudairi
Vice President
The Apache Software Foundation
Tel/WhatsApp +1 617 921 8656
press(at)apache(dot)org

# # #

© The Apache Software Foundation. "Apache", "Struts", "Apache Struts", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

Tuesday August 22, 2017

The Apache Software Foundation Announces Apache® MADlib™ as a Top-Level Project

Big Data machine-learning library used for scalable in-database analytics

Forest Hill, MD –22 August 2017– The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today that Apache® MADlib™ has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles.

Apache MADlib is a comprehensive library for scalable in-database analytics. It provides parallel implementations of machine learning, graph, mathematical and statistical methods for structured and unstructured data.
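
Because MADlib runs inside the database, a model is trained with an ordinary SQL function call. The sketch below is a minimal, hypothetical Java example that invokes MADlib's logistic regression training function over JDBC against a PostgreSQL or Greenplum database with MADlib installed; the connection details, table, and column names are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MadlibLogisticRegression {
        public static void main(String[] args) throws Exception {
            // Placeholder connection to a PostgreSQL/Greenplum database with MADlib installed.
            String url = "jdbc:postgresql://db-host:5432/analytics";
            try (Connection conn = DriverManager.getConnection(url, "analyst", "secret");
                 Statement stmt = conn.createStatement()) {

                // Train a logistic regression model entirely inside the database:
                // source table, output table, dependent column, independent columns.
                stmt.execute(
                    "SELECT madlib.logregr_train(" +
                    "'patients', 'patients_logregr', " +
                    "'second_attack', 'ARRAY[1, treatment, trait_anxiety]')");

                // The fitted coefficients are written to the patients_logregr table,
                // where they can be queried or joined for in-database scoring.
            }
        }
    }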

"Graduating as a Top-Level Project is a very important milestone for Apache MADlib," said Aaron Feng, Vice President of Apache MADlib. "During the incubation process, the MADlib community worked very hard to develop high quality software for in-database analytics, in an open and inclusive manner in accordance with the Apache Way."

MADlib grew out of discussions between database engine developers, data scientists, IT architects and academics interested in new approaches to scalable, sophisticated in-database analytics. These discussions were written up in a paper from VLDB 2009 [1] that coined the term "MAD Skills" for data analysis. The MADlib software project began the following year as a collaboration between researchers at UC Berkeley and engineers and computer scientists at Pivotal (formerly EMC/Greenplum). In September 2015, MADlib joined the ASF community as an incubating project.

MADlib is deployed on a wide variety of industry and academic projects across many different verticals, including automotive, consumer, finance, government, healthcare, and telecommunications.

"MADlib was conceived from the outset as an open-source meeting ground for software developers, computing researchers and data scientists to collaborate on scalable, in-database machine learning and statistics," said Joe Hellerstein, Professor of Computer Science at UC Berkeley, Co-Founder and Chief Strategy Officer at Trifacta, and one of the original authors of MADlib. "It has been great to witness the growth of the MADlib community and codebase as an ASF incubating project, and I look forward to this continuing as a Top-Level Project."

"At Pivotal, we have seen our customers successfully deploy MADlib on large scale data science projects across a wide variety of industry verticals," said Elisabeth Hendrickson, Vice President, R&D for Data at Pivotal. "As MADlib graduates to a Top-Level Project at the ASF, we anticipate increased adoption in the enterprise given the mature level of the codebase and the active developer community."

"The potential of the Apache MADlib project is unbounded," said Jim Jagielski, Vice Chairman of the ASF. "The ability to perform in-depth and detailed analytics, on both structured and unstructured data, using SQL enables MADlib to be applicable in scenarios where others simply can't compete. As not only interest in, but real-world usage of, machine learning becomes commonplace, MADlib joins the growing roster of Apache projects that define innovation."

"Apache MADlib is a great example of the diversity at Apache," said Ted Dunning, Apache MADlib Incubator Mentor and Member of the ASF Board of Directors. "MADlib does state-of-the-art machine learning, but does so as an inherent part of a database. This is a radical approach that can provide important design flexibility. I am excited to see MADlib become a fully fledged project at Apache."

"New participants are more than welcome to join the project," added Feng. "We enthusiastically look forward to working together with all contributors to Apache MADlib in order to advance the state-of-the-art of scale-out data science tools."

[1] http://dl.acm.org/citation.cfm?id=1687576

Availability and Oversight
Apache MADlib software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache MADlib, visit http://madlib.apache.org/ and https://twitter.com/ApacheMADlib

About the Apache Incubator
The Apache Incubator is the entry path for projects and codebases wishing to become part of the efforts at The Apache Software Foundation. All code donations from external organizations and existing external projects wishing to join the ASF enter through the Incubator to: 1) ensure all donations are in accordance with the ASF legal standards; and 2) develop new communities that adhere to our guiding principles. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. For more information, visit http://incubator.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 650 individual Members and 6,200 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Capital One, Cash Store, Cerner, Cloudera, Comcast, Facebook, Google, Hortonworks, HP, Huawei, IBM, Inspur, iSigma, LeaseWeb, Microsoft, ODPi, PhoenixNAP, Pivotal, Private Internet Access, Red Hat, Serenata Flowers, Target, WANdisco, and Yahoo. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "MADlib", "Apache MADlib", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Wednesday July 26, 2017

The Apache Software Foundation Announces Apache® Fluo™ as a Top-Level Project

Newest addition to Apache Big Data ecosystem used for continual, incremental processing of data at petabyte scale

Forest Hill, MD –26 July 2017– The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today that Apache® Fluo™ has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles.

Apache Fluo is a distributed system for incrementally processing large data sets stored in Apache Accumulo (the sorted, distributed key/value store based on Google's Bigtable, built on top of Apache Hadoop, Apache ZooKeeper, and Apache Thrift). With Fluo, users can continuously join new data into large existing data sets without reprocessing all data. Unlike batch and streaming frameworks, Fluo offers much lower latency and can operate on extremely large data sets.

"I am very excited to see Apache Fluo graduate and I would like to thank our mentors for all their help, the Apache Incubator Project Management Committee for its advice and guidance, everyone in the Fluo community, and Google for publishing the research upon which Fluo is based," said Keith Turner, Vice President of Apache Fluo. "As a result of collaboration within the community, we are graduating with a beautifully designed piece of software."

Based on Percolator (built on top of Bigtable to support incremental updates to the search index at Google), Fluo makes it possible to continually update the results of a large-scale computation, index, or analytic as new data is discovered.

"Apache Fluo is a very clever piece of software, elegantly supplementing Apache Accumulo's ability to store and maintain very large indexes," said Christopher Tubbs, ASF Member and Committer on Apache Accumulo and Apache Fluo. "Its support of transactions enables Accumulo to solve a whole new set of big data problems, and its observer framework makes designing ingest workflows fun."

An example of how Fluo works is counting phrases across unique documents. This could be accomplished with two MapReduce jobs: one to produce a unique set of documents and a second to count phrases. Where petabytes of documents are concerned, re-running both jobs for a small amount of new data is inefficient. Apache Fluo enables continuous, quick computation of these two jobs as new data arrives, constantly emitting deltas of phrase counts. Anything could consume the emitted deltas; for example, a query system could be continuously updated using them.
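
As a rough sketch of how such incremental updates are expressed, the hypothetical observer below bumps a phrase count inside a Fluo transaction whenever a phrase row is notified. The column names are illustrative, the registration and configuration boilerplate is omitted, and the exact Observer interface varies across Fluo 1.x releases, so this is only one possible shape of the code.

    import org.apache.fluo.api.client.TransactionBase;
    import org.apache.fluo.api.data.Bytes;
    import org.apache.fluo.api.data.Column;
    import org.apache.fluo.api.observer.Observer;

    // Illustrative only: fires when a phrase row is notified and incrementally
    // updates that phrase's global count within a Fluo transaction.
    public class PhraseCountObserver implements Observer {

        private static final Column PHRASE_COUNT = new Column("phrase", "count");

        @Override
        public void process(TransactionBase tx, Bytes row, Column col) {
            // Read the current count for the notified phrase row (null if unseen so far).
            Bytes current = tx.get(row, PHRASE_COUNT);
            long count = (current == null) ? 0L : Long.parseLong(current.toString());

            // Write the incremented count; Fluo's cross-row transactions keep
            // concurrent updates from different workers consistent.
            tx.set(row, PHRASE_COUNT, Bytes.of(Long.toString(count + 1)));
        }
    }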

"We are excited that Fluo is becoming a Top-Level Project at the Apache Software Foundation," said Dr. Adina Crainiceanu, Apache Rya (incubating) Committer and Associate Professor, Computer Science Department, United States Naval Academy. "Heartfelt congratulations to the Fluo community for achieving this important milestone. The Apache Rya project uses the observer framework in Fluo to cache and maintain answers to complex SPARQL queries for large RDF datasets. Using cached answers greatly improves Rya's performance for complex queries. Fluo complements Rya by allowing the incremental and continuous update of the cached answers. Fluo is particularly useful because it allows updates to happen as new data is ingested, reduces update latency, avoids stale results, and circumvents the periodic reprocessing of the entire dataset. We are confident that Apache Fluo will become one of the important frameworks for updating indexing results in a dynamic data-acquiring context."

"Fluo fulfills an important role in the Apache Hadoop ecosystem, significantly expanding existing capabilities for working with large data sets," said Billie Rinaldi, ASF Member and former Vice President of Apache Accumulo. "I was excited to see this project come to the Apache Incubator, and am even more pleased to see it graduate to a top-level Apache project."

"We welcome new users and contributors to Apache Fluo," added Turner. "If you are interested in trying Fluo, check out the Fluo Tour on the project Website. Join our mailing lists to discuss how Fluo may be a good solution for your problem, as well as for help with debugging and finding starter issues."

Catch Apache Fluo in action and meet members of the Fluo community at Accumulo Summit, 16 October 2017 in Columbia, MD. http://accumulosummit.com/

Availability and Oversight
Apache Fluo software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Fluo, visit http://fluo.apache.org/ and https://twitter.com/ApacheFluo

About the Apache Incubator
The Apache Incubator is the entry path for projects and codebases wishing to become part of the efforts at The Apache Software Foundation. All code donations from external organizations and existing external projects wishing to join the ASF enter through the Incubator to: 1) ensure all donations are in accordance with the ASF legal standards; and 2) develop new communities that adhere to our guiding principles. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. For more information, visit http://incubator.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 620 individual Members and 6,000 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Capital One, Cash Store, Cerner, Cloudera, Comcast, Confluent, Facebook, Google, Hortonworks, HP, Huawei, IBM, InMotion Hosting, iSigma, LeaseWeb, Microsoft, ODPi, PhoenixNAP, Pivotal, Private Internet Access, Produban, Red Hat, Serenata Flowers, Target, WANdisco, and Yahoo. For more information, visit https://www.apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Fluo", "Apache Fluo", "Accumulo", "Apache Accumulo", "Rya", "Apache Rya", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Monday June 05, 2017

The Apache Software Foundation Announces Momentum With Apache® Hadoop® v2.8

Major release of the cornerstone of the Big Data ecosystem, from which dozens of Apache Big Data projects and countless industry solutions originate.

Forest Hill, MD —5 June 2017— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today momentum with Apache® Hadoop® v2.8, the latest version of the Open Source software framework for reliable, scalable, distributed computing.

Now ten years old, Apache Hadoop dominates the greater Big Data ecosystem as the flagship project and community amongst the ASF's more than three dozen projects in the category.

"Apache Hadoop 2.8 maintains the project's momentum in its stable release series," said Chris Douglas, Vice President of Apache Hadoop. "Our community of users, operators, testers, and developers continue to evolve the thriving Big Data ecosystem at the ASF. We're committed to sustaining the scalable, reliable, and secure platform our greater Hadoop community has built over the last decade."

Apache Hadoop supports processing and storage of extremely large data sets in a distributed computing environment. The project has been regularly lauded by industry analysts worldwide for driving market transformation. Forrester Research estimates that firms will spend US$800M on Hadoop software and related services in 2017. According to Zion Market Research, the global Hadoop market is expected to reach approximately US$87.14B by 2022, growing at a CAGR of around 50% between 2017 and 2022.

Apache Hadoop 2.8 is the result of two years of extensive collaborative development by the global Apache Hadoop community. Comprising 2,914 commits of new features, improvements, and bug fixes since v2.7, highlights include:
  • Several important security-related enhancements, including Hadoop UI protection against Cross-Frame Scripting (XFS), an attack that combines malicious JavaScript with an iframe that loads a legitimate page in an effort to steal data from an unsuspecting user, and Hadoop REST API protection against Cross-Site Request Forgery (CSRF) attacks, which attempt to force an authenticated user to execute functionality without their knowledge.

  • Support for Microsoft Azure Data Lake as a source and destination of data. This benefits anyone deploying Hadoop in Microsoft's Azure Cloud. The Azure Data Lake service was actually developed for Hadoop and analytics workloads.

  • The "S3A" client for working with data stored in Amazon S3 has been radically enhanced for scalability, performance, and security. The performance enhancements were driven by Apache Hive and Apache Spark benchmarks. In Hive TPC-DS benchmarks, Apache Hadoop is currently faster working with columnar data stored in S3 than Amazon EMR's closed-source connector. This shows the benefit of collaborative Open Source development.

  • Several WebHDFS-related enhancements, including an integrated CSRF prevention filter, OAuth2 support, the ability to allow or disallow snapshots via WebHDFS, and more.

  • Integration with other applications has been improved by providing a separate hadoop-hdfs-client JAR, distinct from the hadoop-hdfs JAR that contains all the server-side code. Downstream projects that access HDFS can depend on the hadoop-hdfs-client module to reduce the number of transitive classpath dependencies.

  • YARN NodeManager resource reconfiguration through the RM Admin CLI on a live cluster, which gives YARN clusters a more flexible resource model, especially for Cloud deployments.

In addition to physical Hadoop clusters, where the majority of storage and computation lies, Apache Hadoop is very popular within Cloud infrastructures. Contributions from Apache Hadoop's diverse community include improvements provided by Cloud infrastructure vendors and large Hadoop-in-Cloud users. These improvements, Azure and S3 storage support and YARN reconfiguration in particular, improve Hadoop's deployment on and integration with Cloud infrastructures. The improvements in Hadoop 2.8 enable Cloud-deployed clusters to be more dynamic in sizing, adapting to demand by scaling up and down.
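
The S3A improvements mentioned above are used through the ordinary Hadoop FileSystem API, so existing HDFS-oriented code can read Cloud objects simply by addressing an s3a:// URI. The following is a minimal sketch; the bucket, object key, and in-code credentials are placeholders (in practice credentials normally come from the environment or an instance profile).

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URI;
    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3AReadExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder credentials for illustration only.
            conf.set("fs.s3a.access.key", "ACCESS_KEY");
            conf.set("fs.s3a.secret.key", "SECRET_KEY");

            // The same FileSystem API used for HDFS works against S3 via the s3a:// scheme.
            FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
            Path object = new Path("s3a://example-bucket/logs/2017-06-05.txt");

            try (FSDataInputStream in = fs.open(object);
                 BufferedReader reader = new BufferedReader(
                     new InputStreamReader(in, StandardCharsets.UTF_8))) {
                System.out.println(reader.readLine()); // print the first line of the object
            }
        }
    }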

"My colleagues and I are happy that tests of Apache Hive and Hadoop 2.8 show that we are able to provide a similar experience reading data in from S3 as Amazon EMR, with its closed-source fork/rewrite of S3," said Steve Loughran, member of the Apache Hadoop Project Management Committee.

Hailed as a "Swiss army knife of the 21st century" by the Media Guardian Innovation Awards and "the most important software you’ve never heard of…helped enable both Big Data and Cloud computing" by author Thomas Friedman, Apache Hadoop is used by an array of companies such as Alibaba, Amazon Web Services, AOL, Apple, eBay, Facebook, foursquare, IBM, HP, LinkedIn, Microsoft, Netflix, The New York Times, Rackspace, SAP, Tencent, Teradata, Tesla Motors, Uber, and Twitter. Yahoo, an early pioneer, hosts the world's largest known Hadoop production environment to date, spanning more than 38,000 nodes.

Catch Apache Hadoop in action at DataWorks Summit 13-15 June 2017 in San Jose, CA.

Availability and Oversight
Apache Hadoop software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Hadoop, visit http://hadoop.apache.org/ and https://twitter.com/hadoop

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 680 individual Members and 6,000 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Capital One, Cash Store, Cerner, Cloudera, Comcast, Confluent, Facebook, Google, Hortonworks, HP, Huawei, IBM, InMotion Hosting, iSigma, LeaseWeb, Microsoft, ODPi, PhoenixNAP, Pivotal, Private Internet Access, Produban, Red Hat, Serenata Flowers, Target, WANdisco, and Yahoo. For more information, visit http://www.apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Hadoop", "Apache Hadoop", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #
