Entries tagged [framework]

Tuesday May 21, 2019

The Apache Software Foundation Announces Apache® Dubbo™ as a Top-Level Project

Popular Open Source Remote Procedure Call framework in use at more than 150 companies, including Alibaba Group, China Life, China Telecom, Dangdang, Didi Chuxing, Haier, and Industrial and Commercial Bank of China, among others.

Wakefield, MA —20 May 2019— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today Apache® Dubbo™ as a Top-Level Project (TLP).

Apache Dubbo is a high-performance, Java based RPC (remote procedure call) framework. The project was originally developed at Alibaba and open-sourced in 2011, and entered the Apache Incubator in February 2018.

"This day is not only the success of Apache Dubbo project itself, but also another success of The Apache Way," said Ian Luo, Vice President of Apache Dubbo. "Back to the time when Dubbo started to incubate at The Apache Software Foundation, there was a small number of initial committers to the project, but today the number of the Dubbo committers has increased by five times, and we are proud of having far more contributors in this project by now. It is indeed a great journey."

The Dubbo framework specifies the methods that can be called remotely across distributed and microservice systems. Its primary functionalities are interface-based remote calls; fault tolerance and load balancing; and automatic service registration and discovery. Features include:

  • Transparent interface-based RPC – provides high-performance, interface-based RPC that is transparent to users (a minimal usage sketch follows this list).

  • Intelligent load balancing – supports multiple out-of-the-box load balancing strategies that sense downstream service status to reduce overall latency and improve system throughput.

  • Automatic service registration and discovery – supports multiple service registries that can detect service online/offline instantly.

  • High extensibility – the micro-kernel and plugin design ensures that core features such as protocol, transport, and serialization can easily be extended with third-party implementations.

  • Runtime traffic routing – can be configured at runtime so that traffic can be routed according to different rules, which makes it easy to support features such as blue-green deployment, data center aware routing, etc.

  • Visualized service governance – provides rich tools for service governance and maintenance such as querying service metadata, health status, and statistics.
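
As an illustration of the transparent, interface-based programming model described above, below is a minimal sketch of exposing and consuming a service with Dubbo's programmatic configuration API. The GreetingService interface, the application names, and the ZooKeeper registry address are hypothetical; real deployments more commonly use annotation- or XML-based configuration, and this sketch omits error handling and shutdown.

    import org.apache.dubbo.config.ApplicationConfig;
    import org.apache.dubbo.config.ReferenceConfig;
    import org.apache.dubbo.config.RegistryConfig;
    import org.apache.dubbo.config.ServiceConfig;

    // Hypothetical service contract shared between provider and consumer.
    interface GreetingService {
        String greet(String name);
    }

    public class GreetingDemo {
        public static void main(String[] args) {
            // Provider side: export an implementation to a (hypothetical) ZooKeeper registry.
            ServiceConfig<GreetingService> service = new ServiceConfig<>();
            service.setApplication(new ApplicationConfig("greeting-provider"));
            service.setRegistry(new RegistryConfig("zookeeper://127.0.0.1:2181"));
            service.setInterface(GreetingService.class);
            service.setRef(name -> "Hello, " + name);
            service.export();

            // Consumer side: obtain a transparent, load-balanced proxy for the same interface.
            ReferenceConfig<GreetingService> reference = new ReferenceConfig<>();
            reference.setApplication(new ApplicationConfig("greeting-consumer"));
            reference.setRegistry(new RegistryConfig("zookeeper://127.0.0.1:2181"));
            reference.setInterface(GreetingService.class);
            GreetingService greeter = reference.get();
            System.out.println(greeter.greet("Dubbo"));
        }
    }

The calling code only ever sees the plain Java interface; registry-driven discovery, load balancing, and the wire protocol are handled by the proxy returned from the reference.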

Apache Dubbo is in use at more than 150 companies, including Alibaba Group, China Life, China Telecom, Dangdang, Didi Chuxing, Haier, Industrial and Commercial Bank of China, NetEase, Qunar, and Youzan, among others.

"Apache Dubbo is one of the most highly visible projects that was open-sourced by Alibaba," said Jiangwei Jiang, Principal Engineer at Alibaba Cloud Intelligence. "Dubbo is widely used in Alibaba and many other companies. It is one of the best designed Open Source frameworks to develop microservices with high-throughput, complicated business logic, and sophisticated governance."

"Congratulations to Dubbo for graduating from the Apache Incubator! Apache Dubbo, as a high-performance service governance framework, plays a crucial role in Didi's evolution of technical architecture," said Donghai Shi, Senior Technical Director at Didi Chuxing. "With the rapid business growth, our application services and business logic grows more and more complicated, we have been facing more and more challenges in R&D efficiency and business stability. Many thanks to Apache Dubbo, which serves as a strong support for our service governance. Based on Apache Dubbo's service governance framework, we can be more practical and determined in building business with microservices."

"Congratulations on Dubbo's promotion to an Apache Top-Level Project. As a core component of service, Dubbo has a profound impact and is one of the best choices for service architecture," said Xiaofan Yu, Architecture of NetEase Cloud Music. "We have learned a lot from Dubbo's design and implementation. I believe that it can develop in a more quick and stable way after graduation and become the cornerstone of future microservice architecture."

"Congratulations to Dubbo on becoming an Apache Top-Level Project," said Ruimin Jin, Technical Director at Youzan Cloud. "Youzan's large-scale microservice cluster is built on Apache Dubbo. Dubbo's outstanding features, flexible design, and extensive community experience have helped us build distributed systems quickly. In the past three years, we have done a lot of service governance based on Dubbo and achieved very good business results. I hope that Apache Dubbo will achieve more in multi-language support in the future, and that the Open Source community can also contribute more plugins to Dubbo."

"Qunar.com first chose Apache Dubbo as service infrastructure in 2012 --doing so helped us take fewer detours in the framework selection when we deployed it at scale," said Zhaohui Yu, Former Senior Director, Infrastructure Department, Qunar.com. "Our team was happy to participate in the community by fixing bugs and contributing new features over the years. Congratulations to Dubbo for graduating as an Apache Top-Level Project: it is a great honor for all Dubbo users."

"Congratulations to Dubbo graduating from the Apache Incubator. As a very popular SOA framework, Apache Dubbo forms part of our infrastructure layer to support almost all different business units within Guazi.com," said Haozhi Liu, Principal Architect at Guazi.com. "It also plays an increasingly important role along with our technical transition from PHP to Java. Furthermore, during our process of adopting Apache Dubbo, both the community and Project Management Committee have been very helpful. We truly believe that with this strong community support, Apache Dubbo project will continue to be successful in the future."

"It's both thrilling and an honor to participate in the development of Apache Dubbo," added Luo. "Our journey continues through the excellent efforts of the greater Apache Dubbo community. We welcome additional contributions to the source code, features, documentation, and discussions on the mailing list."

Catch Apache Dubbo in action at the Apache Dubbo MeetUp series (26 May/Beijing, 6 July/Shenzhen, and 17 August/Shanghai), ApacheCon North America (9-12 September/Las Vegas), and ApacheCon Europe (22-24 October/Berlin).

Availability and Oversight
Apache Dubbo software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Dubbo, visit http://dubbo.apache.org/ and https://twitter.com/ApacheDubbo

About the Apache Incubator
The Apache Incubator is the entry path for projects and codebases wishing to become part of the efforts at The Apache Software Foundation. All code donations from external organizations and existing external projects seeking to join the ASF enter through the Incubator to: 1) ensure all donations are in accordance with the ASF legal standards; and 2) develop new communities that adhere to our guiding principles. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. For more information, visit http://incubator.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects that provide $20B+ worth of Apache Open Source software to the public at 100% no cost. Through the ASF's merit-based process known as "The Apache Way," more than 730 individual Members and 7,000 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting billions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Aetna, Alibaba Cloud Computing, Anonymous, ARM, Baidu, Bloomberg, Budget Direct, Capital One, Cerner, Cloudera, Comcast, Facebook, Google, Handshake, Hortonworks, Huawei, IBM, Indeed, Inspur, Leaseweb, Microsoft, ODPi, Pineapple Fund, Pivotal, Private Internet Access, Red Hat, Target, Tencent, Union Investment, Workday, and Verizon Media. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Dubbo", "Apache Dubbo", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Wednesday April 24, 2019

The Apache Software Foundation Announces Apache® NetBeans™ as a Top-Level Project

Popular, award-winning Open Source development environment, tooling platform, and application framework enables Java programmers to easily build desktop, mobile, and Web applications

Wakefield, MA —24 April 2019— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today Apache® NetBeans™ as a Top-Level Project (TLP).

Apache NetBeans is an Open Source development environment, tooling platform, and application framework that enables Java programmers to build desktop, mobile, and Web applications. The project was originally developed as part of a student project in 1996, was acquired and open-sourced by Sun Microsystems in 2000, and became part of Oracle when it acquired Sun Microsystems in 2010. NetBeans was submitted to the Apache Incubator in October 2016.

"Being part of the ASF means that NetBeans is now not only free and Open Source software: it is also, uniquely, and for the first time, part of a foundation specifically focused on enabling open governance," said Geertjan Wielenga, Vice President of Apache NetBeans. "Every contributor to the project now has equal say over the roadmap and direction of NetBeans. That is a new and historic step and the community has been ready for this for a very long time. Thanks to the strong stewardship of NetBeans in Sun Microsystems and Oracle, Apache NetBeans is now ready for the next phase in its development and we welcome everyone to participate as equals as we move forward."

Apache NetBeans 11.0 was released on 4 April 2019 and is the project's third major release since entering the Apache Incubator. The project most recently won the 2018 Duke's Choice Award, a well-established industry award in the Java ecosystem.

"'Have a patch for NetBeans? Then create a pull request for Apache NetBeans!' I love how that sounds," said Jaroslav Tulach, original founder and architect of NetBeans. "I am really glad the transition has gone so well and that 'my NetBeans' has turned into a full-featured project at The Apache Software Foundation."

"From the moment that I first evaluated NetBeans for use in my courses at Dawson College and Concordia University, I recognized that it was a unique tool. In the years that followed, it has never disappointed me as the best tool for education. Now, I am even more excited about using it as it becomes a top-level project in the Apache Software Foundation," said Ken Fogel, Chairperson of Computer Science Technology at Dawson College, Montreal. "A lot of amazing developers from around the world have contributed to making NetBeans a first-class tool worthy of being under The Apache Software Foundation. Now, more than ever, its continued evolution will be faster, more responsive to the needs of the development community, and ever more open to the participation of the community. I am proud to have had a very small part in its development and I am excited to see how it will grow and evolve going forward."

By becoming an Apache project, NetBeans is now positioned to receive more contributions from around the world. For example, large companies that use NetBeans as an application framework to build internal or commercial applications are much more likely to contribute to NetBeans now that it is part of the ASF rather than part of a commercial enterprise. At the same time, individual contributors from Oracle continue to work on Apache NetBeans in its new home, as part of the worldwide community of individual contributors, both self-employed and from other organizations.

"Apache is the perfect home for NetBeans, allowing its long tail of historic contributors to stay involved while also launching another stage in its evolution for newcomers," said Simon Phipps, current President of the Open Source Initiative. "As a member of the new Apache NetBeans Project Management Committee, I look forward to helping in any way I can and I encourage the whole Java family to do so too."

"I've used NetBeans since I first started learning Java over 15 years ago," said Neil C. Smith, creator of PraxisLIVE. "It remains my tool of choice. It's great to be part of the Apache community and helping it to thrive. But NetBeans is more than just a development environment, it's also a powerful platform for building other business and development tools. It forms the backbone of PraxisLIVE, which I have created and continue developing on top of Apache NetBeans, powering a hybrid visual Smalltalk-like IDE for the underlying live programmable Java actor system". 

"I am an avid NetBeans user, since my first experience in about 2008. The most important aspect is, quoting Java EE guru Adam Bien: ‘It always works’," said Pieter van den Hombergh, lecturer at Fontys Venlo University of Applied Sciences. "This is particularly important in my job and to my audience: I teach Java, as well as, occasionally, PHP. Now that NetBeans has gone through the hard work of the transfer from Oracle to Apache, I am glad to see it increasingly becoming complete again. I am certain to enjoy using the up to date version with Java 11+, JUnit 5 integration, and all the other goodies, either built-in or provided by the many useful plugins."

"The flip side of freedom is responsibility," added Wielenga. "Now that the community finally has what’s its been asking for for so many years, it needs to step up and take ownership of Apache NetBeans. Each and every user of Apache NetBeans now has the ability to ask themselves where they can best fit in to drive the project forward -- from evaluating bugs, to reviewing pull requests, to tweaking the documentation, to verifying tutorials, to helping answer questions on the mailing lists, or sharing tips and insights on Twitter. Lack of Java knowledge and even lack of programming knowledge is no excuse; there’s really something to do for everyone with any skill or interest level. There is no need nor excuse to stand on the sidelines anymore -- NetBeans is now yours, exactly as much as you want it to be."

Catch Apache NetBeans in action at conferences all over the world. Users are welcome to set up and host their own Apache NetBeans events, such as the annual Apache NetBeans Day UK, which will be held 27 September 2019, in London.

Availability and Oversight
Apache NetBeans software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache NetBeans, visit http://netbeans.apache.org/ and https://twitter.com/netbeans

About the Apache Incubator
The Apache Incubator is the entry path for projects and codebases wishing to become part of the efforts at The Apache Software Foundation. All code donations from external organizations and existing external projects seeking to become an Apache project or initiative enter through the Incubator to: 1) ensure all donations are in accordance with the ASF legal standards; and 2) develop new communities that adhere to our guiding principles. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. For more information, visit http://incubator.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects that provide $20B+ worth of Apache Open Source software to the public at 100% no cost. Through the ASF's merit-based process known as "The Apache Way," more than 730 individual Members and 7,000 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting billions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Aetna, Alibaba Cloud Computing, Anonymous, ARM, Baidu, Bloomberg, Budget Direct, Capital One, Cerner, Cloudera, Comcast, Facebook, Google, Handshake, Hortonworks, Huawei, IBM, Indeed, Inspur, Leaseweb, Microsoft, ODPi, Pineapple Fund, Pivotal, Private Internet Access, Red Hat, Target, Tencent, Union Investment, Workday, and Verizon Media. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "NetBeans", "Apache NetBeans", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Wednesday January 23, 2019

The Apache Software Foundation Announces Apache® Hadoop® v3.2.0

Pioneering Open Source distributed enterprise framework powers US$166B Big Data ecosystem

Wakefield, MA —23 January 2019— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, today announced Apache® Hadoop® v3.2.0, the latest version of the Open Source software framework for reliable, scalable, distributed computing.

Now in its 11th year, Apache Hadoop anchors the US$166B Big Data ecosystem (source: IDC) by enabling data applications to run and be managed on large hardware clusters in a distributed computing environment. "Apache Hadoop has been at the center of this big data transformation, providing an ecosystem with tools for businesses to store and process data on a scale that was unheard of several years ago," according to Accenture Technology Labs.

"This latest release unlocks the powerful feature set the Apache Hadoop community has been working on for more than nine months," said Vinod Kumar Vavilapalli, Vice President of Apache Hadoop. "It further diversifies the platform by building on the cloud connector enhancements from Apache Hadoop 3.0.0 and opening it up for deep learning use-cases and long-running apps."

Apache Hadoop 3.2.0 highlights include:
  • ABFS Filesystem connector —supports the latest Azure Data Lake Storage Gen2 (a minimal usage sketch follows this list);
  • Enhanced S3A connector —including better resilience to throttled AWS S3 and DynamoDB IO;
  • Node Attributes Support in YARN —allows nodes to be tagged with multiple labels based on their attributes, and supports placing containers based on expressions over these labels;
  • Storage Policy Satisfier —enables HDFS (Hadoop Distributed File System) applications to move blocks between storage types in order to satisfy the storage policies set on files/directories;
  • Hadoop Submarine —enables data engineers to easily develop, train, and deploy deep learning models (in TensorFlow) on the same Hadoop YARN cluster;
  • C++ HDFS client —provides asynchronous IO to HDFS, which helps downstream projects such as Apache ORC;
  • Upgrades for long-running services —supports seamless in-place upgrades of long-running containers via the YARN Native Service API (application program interface) and CLI (command-line interface).
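
As a small illustration of how the ABFS and S3A connectors are consumed, the sketch below reads a file through Hadoop's generic FileSystem API. The abfs:// account, container, and file path are placeholders, and the connector credentials are assumed to be configured separately (for example in core-site.xml); an s3a:// URI works the same way once its connector is on the classpath.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URI;
    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CloudStoreRead {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder ABFS location (Azure Data Lake Storage Gen2).
            URI store = URI.create("abfs://container@account.dfs.core.windows.net/");
            try (FileSystem fs = FileSystem.get(store, conf);
                 BufferedReader in = new BufferedReader(new InputStreamReader(
                         fs.open(new Path("/data/events/part-00000")), StandardCharsets.UTF_8))) {
                System.out.println(in.readLine());   // print the first line of the remote object
            }
        }
    }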

"This is one of the biggest releases in Apache Hadoop 3.x line which brings many new features and over 1,000 changes," said Sunil Govindan, Apache Hadoop 3.2.0 release manager. "We are pleased to announce that Apache Hadoop 3.2.0 is available to take your data management requirements to the next level. Thanks to all our contributors who helped to make this release happen."

Apache Hadoop is widely deployed at numerous enterprises and institutions worldwide, such as Adobe, Alibaba, Amazon Web Services, AOL, Apple, Capital One, Cloudera, Cornell University, eBay, ESA Calvalus satellite mission, Facebook, foursquare, Google, Hortonworks, HP, Huawei, Hulu, IBM, Intel, LinkedIn, Microsoft, Netflix, The New York Times, Rackspace, Rakuten, SAP, Tencent, Teradata, Tesla Motors, Twitter, Uber, and Yahoo. The project maintains a list of educational and production users, as well as companies that offer Hadoop-related services at https://wiki.apache.org/hadoop/PoweredBy

Global Knowledge hails, "...the open-source Apache Hadoop platform changes the economics and dynamics of large-scale data analytics due to its scalability, cost effectiveness, flexibility, and built-in fault tolerance. It makes possible the massive parallel computing that today's data analysis requires."

Hadoop is proven at scale: Netflix captures 500+B daily events using Apache Hadoop. Twitter uses Apache Hadoop to handle 5B+ sessions a day in real time. Twitter’s 10,000+ node cluster processes and analyzes more than a zettabyte of raw data through 200B+ tweets per year. Facebook’s cluster of 4,000+ machines that store 300+ petabytes is augmented by 4 new petabytes of data generated each day. Microsoft uses Apache Hadoop YARN to run the internal Cosmos data lake, which operates over hundreds of thousands of nodes and manages billions of containers per day.

Transparency Market Research recently reported that the global Hadoop market is anticipated to rise at a staggering 29% CAGR with a market valuation of US$37.7B by the end of 2023.

Apache Hadoop remains one of the most active projects at the ASF: it ranks #1 for Apache project repositories by code commits, and is the #5 repository by size (3,881,797 lines of code).

"The Apache Hadoop community continues to go from strength to strength in further driving innovation in Big Data," added Vavilapalli. "We hope that developers, operators and users leverage our latest release in fulfilling their data management needs."

Catch Apache Hadoop in action at the Strata conference, 25-28 March 2019 in San Francisco, and dozens of Hadoop MeetUps held around the world, including on 30 January 2019 at LinkedIn in Sunnyvale, California.

Availability and Oversight
Apache Hadoop software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Hadoop, visit http://hadoop.apache.org/ and https://twitter.com/hadoop

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 730 individual Members and 7,000 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official global conference series. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Aetna, Alibaba Cloud Computing, Anonymous, ARM, Baidu, Bloomberg, Budget Direct, Capital One, Cerner, Cloudera, Comcast, Facebook, Google, Handshake, Hortonworks, Huawei, IBM, Indeed, Inspur, LeaseWeb, Microsoft, Oath, ODPi, Pineapple Fund, Pivotal, Private Internet Access, Red Hat, Target, Tencent, and Union Investment. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Hadoop", "Apache Hadoop", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Wednesday October 24, 2018

The Apache Software Foundation Announces Apache® ServiceComb™ as a Top-Level Project

Open Source microservices framework in use at CeeWa Intelligent Technology, Huawei Cloud, iSoftStone, itcast, MedSci Medicine, Pactera, PICC, Tongji University, among others.

Wakefield, MA —24 October 2018— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today Apache® ServiceComb™ as a Top-Level Project (TLP).

Apache ServiceComb is an Open Source microservices software framework that enables developers to easily build and manage microservices-based applications efficiently and conveniently. The project was originally developed at Huawei and was donated to the Apache Incubator in November 2017.

"We are very proud that ServiceComb has arrived at this important milestone," said Willem Jiang, Vice President of Apache ServiceComb. “ServiceComb has evolved from a microservices software development kit to a full microservices solution in less than a year. While incubating in Apache, the number of ServiceComb users grew rapidly, and new developers are constantly coming in. It is amazing to grow at such a high rate."

As a one-stop microservices solution, Apache ServiceComb contains 3 sub-projects:
  1. Java-Chassis - an out-of-the-box Java microservices SDK that includes four parts: service contract, programming model, running model, and communication model, with a complete set of microservices governance capabilities such as load balancing, fallback, rate limiting, and call-chain tracing. Microservices governance and business logic are kept isolated (a minimal provider sketch follows this list).

  2. Service-Center - a high-performance, highly available, stateless Golang implementation of the Service Discovery and Registration Center based on Etcd, which provides real-time service instance registration, real-time service instance notification, and inter-service testing based on contract.

  3. Saga - provides an eventual consistency solution for distributed transactions which could be a pain point of microservices.
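
To give a flavor of the Java-Chassis programming model mentioned in item 1, the sketch below publishes a single REST operation from which the service contract can be derived. The annotation and package names follow common ServiceComb samples, and the schema id, path, and class are hypothetical; treat the details as illustrative rather than authoritative.

    import org.apache.servicecomb.provider.rest.common.RestSchema;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestParam;

    // Registers this class as a REST schema; Java-Chassis derives the service contract
    // (OpenAPI) from the annotated method signatures, keeping governance out of the code.
    @RestSchema(schemaId = "greeting")
    @RequestMapping(path = "/greeting")
    public class GreetingEndpoint {

        @GetMapping(path = "/hello")
        public String hello(@RequestParam(name = "name") String name) {
            return "Hello, " + name;
        }
    }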

Apache ServiceComb's highlights include:
  • Asynchronous kernel - both synchronous and asynchronous programming models, built on Vert.x, effectively ensure high performance and low latency, whether in traditional enterprise applications or in emerging businesses such as e-commerce, the Internet, and IoT, and help avoid the avalanche effect under peak loads.

  • Out-of-the-box experience - using the scaffolding website start.servicecomb.io, developers can launch microservices-based projects with integrated service registration, discovery, communication, and microservices governance capabilities, plus centralized configuration by default.

  • Open API - automatic code generation and the isolation of business logic from governance streamline the DevOps pipeline, enabling different teams to efficiently and independently develop and manage code, tests, and documentation using bidirectional contract files and OpenAPI.

Apache ServiceComb is in use at dozens of organizations, including CeeWa Intelligent Technology, Huawei Cloud, iSoftStone, itcast, MedSci Medicine, Pactera, PICC, and Tongji University, among others.

"In 2015, Huawei Cloud launched microservices-related services, and this is the original code base of ServiceComb," said Liao Zhenqin, General Manager of Huawei Cloud PaaS Product Department. "Apache ServiceComb is the core of Huawei Cloud microservices engine CSE. It is the defacto standard at Huawei, and is widely used on many major products, including Huawei Consumer Cloud, Huawei Cloud Core, Huawei EI, among others. We are very happy to see ServiceComb's rapid progress at in the Apache Incubator, and encourage more engineers to continue to accept and contribute to Open Source by becoming a part of the Apache Software Foundation volunteer community."

Huawei Consumer Cloud depends on Apache ServiceComb's high-performance, low-latency, asynchronous technology to run a microservices deployment of more than 1,500 nodes that supports 400 million online mobile phone users. With ServiceComb, queries per second more than doubled while latency dropped by 45%.

"We use Apache ServiceComb to build our 'intelligent brain' for drone control. ServiceComb is an out-of-the-box microservices solution, which provides the microservices governance abilities without any coding," said Zhou Sujian,  Chief Architect of CeeWa Intelligent Technology. "Compared with using or implementing a traditional RPC framework, a lot of development resources are saved. With ServiceComb, both the team development and the node deployment efficiency are doubled, which are very exciting. We are also very happy to see ServiceCombs work on integrating Open Source distributed tracing systems such as Apache Zipkin, Apache SkyWalking and Prometheus, which greatly improved our cross-node chain tracing ability, and the team's efficiency to locate and solve problems."

"As microservices architecture is not a single-point technical issue, we need to response the rapid change of technology, organization, and processes flow," said Bao Yongwei, Vice President of Product Engineering Center at iSoftstone Smart City Technology. "Apache ServiceComb Java-Chassis does a good job, its core is implemented entirely on service contract which is based on OpenAPI that can help us automatically generate service skeleton code. This allows our teams to smoothly integrate our Smart City business system into microservices. We are very happy to see that our employees actively participate in the ServiceComb project and learned the Apache Way of open development with the Apache community. Apache ServiceComb is a star project, we strongly believe that participating in the ServiceComb community will help improve our software engineers' abilities."

"Apache ServiceComb has a solid community and a comprehensive technology background. The project's commitment to making it easier for enterprises to embrace microservices and cloud computing is impressive," said Yu Yang, Dean of itcast Institute. "Itcast selected ServiceComb as a microservices technology teaching material for education and training based on its good concepts on microservices design, excellent technical practice and perfect community documentation."

"Graduating as an Apache Top-Level Project demonstrates that all contributors have a place with Apache ServiceComb, whether they were part of the project before it arrived at the Apache Incubator or joined the community during the incubation process," added Jiang. "It is a pleasure to collaborate with volunteers in this open, equal, and diverse environment. We welcome new ServiceComb contributors to help with code development,  evangelizing on innovations in microservices, promoting community development 'the Apache Way', and other ways of participating."

Availability and Oversight
Apache ServiceComb software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache ServiceComb, visit http://servicecomb.apache.org/ and https://twitter.com/ServiceComb

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 730 individual Members and 6,800 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Aetna, Anonymous, ARM, Bloomberg, Budget Direct, Capital One, Cerner, Cloudera, Comcast, Facebook, Google, Handshake, Hortonworks, Huawei, IBM, Indeed, Inspur, LeaseWeb, Microsoft, Oath, ODPi, Pineapple Fund, Pivotal, Private Internet Access, Red Hat, Target, and Union Investment. For more information, visit https://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "ServiceComb", "Apache ServiceComb", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Wednesday May 23, 2018

The Apache Software Foundation Announces Apache® Wicket™ v8.0.0

Open Source component-oriented Web framework used by the Brazilian Air Force, Emirates Real Estate Investment Trust, German National Library of Science and Technology, Japan National Police Agency, Norwegian Ministry of Foreign Affairs, Orange Moldova, Savings Banks Group Finland, Taiwan High Speed Rail, and Topicus B.V., among other organizations.

Wakefield, MA —23 May 2018— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today Apache® Wicket™ v8.0.0, the component-oriented Web framework.

Apache Wicket is a popular component-oriented server-side Java Web framework used to build complex Web applications that reap the benefits of object-oriented programming, such as reusability, encapsulation, and easy extensibility. With the tagline of "Write less, achieve more," the latest major release of Apache Wicket aims to help developers write even more robust, maintainable, and highly performant Web applications and Websites for governments, stores, universities, cities, banks, email providers, and more.

"Apache Wicket 8's flagship feature, support for Java 8 idioms, started off a few years ago, and allows for a really great development experience where you can achieve the same functionality in a more secure, readable way," said Martijn Dashorst, Vice President of Apache Wicket. "I think our users are going to be very happy with the benefits it brings."

Apache Wicket was initially developed in 2004 and joined The Apache Software Foundation in 2007. The project is one of the few survivors of the "Java server-side Web framework wars" of the mid-2000s; with a robust history and a growing user base over the past decade, Apache Wicket remains a premier choice for Java developers across the world.

Using Apache Wicket, developers can build custom components easily, using normal Java idioms for extensibility and encapsulation. Wicket gives developers the ability to create complex user interfaces using just Java and HTML, and to keep applications secure and maintainable. Apache Wicket abstracts request-oriented Web technologies away and presents user interface concepts to the Java developer. So instead of thinking in "requests" and "responses", developers using Wicket think in "Pages", "Panels", "Buttons", "Links", "Forms", and "ListViews", as in the sketch below.
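
For example, a small hypothetical page might wire a label and a link together as follows; the component ids must match wicket:id attributes in an accompanying HelloPage.html template, and the lambda model relies on Wicket 8 making IModel a functional interface.

    import org.apache.wicket.markup.html.WebPage;
    import org.apache.wicket.markup.html.basic.Label;
    import org.apache.wicket.markup.html.link.Link;
    import org.apache.wicket.model.IModel;

    public class HelloPage extends WebPage {

        private int clicks = 0;

        public HelloPage() {
            // Wicket 8: IModel is a functional interface, so a lambda supplies the
            // current value each time the page renders.
            IModel<String> message = () -> "Clicked " + clicks + " times";
            add(new Label("message", message));   // matches <span wicket:id="message"> in HelloPage.html
            add(new Link<Void>("counter") {       // matches <a wicket:id="counter"> in HelloPage.html
                @Override
                public void onClick() {
                    clicks++;                     // plain Java state; Wicket re-renders the page afterwards
                }
            });
        }
    }

No request parsing or response rendering code appears anywhere: the framework maps the HTTP round trip onto the page, its components, and their models.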

Apache Wicket 8.0.0 includes new features, bug fixes and improvements that include:
  • Java 8 or newer is required
  • Servlet 3.1 is required
  • Component databinding (IModel) is now a Functional Interface
  • Native JSR 310 Date/Time support
  • Java's Optional type is used in the right places
  • Removal of many deprecated features from previous versions

The full list of new features and changes is available in the project release notes at https://wicket.apache.org/2018/05/22/wicket-8-released.html

"Wicket 8 is a long awaited milestone for the project," said Andrea del Bene, Apache Wicket committer and Apache Wicket v8.0 Release Manager. "We are proud to provide all the new functionality to Web developers who can leverage Java 8 to remove many lines of code throughout their code bases. The new features are essential for modern Java developers. With Wicket 8, developers can create more maintainable and better performant applications."

Apache Wicket is used by thousands of organizations that include Åland Islands Library, Brazilian Air Force, Emirates Real Estate Investment Trust, German National Library of Science and Technology, India Goa Directorate of Agriculture, Japan National Police Agency, Norwegian Ministry of Foreign Affairs, Orange Moldova, Pune Smart City, RiskCo, Savings Banks Group Finland, Sweden's Helge Library, Taiwan High Speed Rail, and Topicus B.V., among others.

"Apache Wicket always had security as one of its pillar stones. This is why we know our access management solution Topicus KeyHub has a solid foundation with Wicket," said Martijn Maatman, manager at Topicus B.V. "Wicket 8 contains new and improved features to increase security and effectiveness. This makes it our framework of choice to build software solutions that can stand the test of time."

"Apache Wicket's focus on plain Java and HTML enabled OpenMeetings to migrate from Flash to a maintainable codebase," said Maxim Solodovnik, VP of Apache OpenMeetings™ Web conferencing project. "The stability and modularity of the Wicket framework gives us the assurance our investment is not obsolete in a couple of weeks or months."

"Wicket comes with a great user guide, and with our quick-start wizard you can have your first Wicket project up and running in seconds," added Dashorst.

Catch Apache Wicket in action with live examples on the Apache Wicket Website, as well as many available recordings of presentations from JavaZone, Devoxx, and ApacheCon.

Availability and Oversight
Apache Wicket software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Wicket, visit http://wicket.apache.org/ and https://twitter.com/apache_wicket 

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server —the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 730 individual Members and 6,800 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Aetna, Anonymous, ARM, Bloomberg, Budget Direct, Capital One, Cerner, Cloudera, Comcast, Facebook, Google, Hortonworks, Huawei, IBM, Indeed, Inspur, LeaseWeb, Microsoft, Oath, ODPi, Pineapple Fund, Pivotal, Private Internet Access, Red Hat, Target, and Union Investment. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Wicket", "Apache Wicket", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Thursday December 14, 2017

The Apache Software Foundation Announces Apache® Hadoop® v3.0.0 General Availability

Ubiquitous Open Source enterprise framework maintains decade-long leading role in $100B annual Big Data market

Forest Hill, MD —14 December 2017— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, today announced Apache® Hadoop® v3.0.0, the latest version of the Open Source software framework for reliable, scalable, distributed computing.

Over the past decade, Apache Hadoop has become ubiquitous within the greater Big Data ecosystem by enabling firms to run and manage data applications on large hardware clusters in a distributed computing environment.

"This latest release unlocks several years of development from the Apache community," said Chris Douglas, Vice President of Apache Hadoop. "The platform continues to evolve with hardware trends and to accommodate new workloads beyond batch analytics, particularly real-time queries and long-running services. At the same time, our Open Source contributors have adapted Apache Hadoop to a wide range of deployment environments, including the Cloud."

"Hadoop 3 is a major milestone for the project, and our biggest release ever," said Andrew Wang, Apache Hadoop 3 release manager. "It represents the combined efforts of hundreds of contributors over the five years since Hadoop 2. I'm looking forward to how our users will benefit from new features in the release that improve the efficiency, scalability, and reliability of the platform."

Apache Hadoop 3.0.0 highlights include:
  • HDFS erasure coding —halves the storage cost of HDFS while also improving data durability (a usage sketch follows this list);
  • YARN Timeline Service v.2 (preview) —improves the scalability, reliability, and usability of the Timeline Service;
  • YARN resource types —enables scheduling of additional resources, such as disks and GPUs, for better integration with machine learning and container workloads;
  • Federation of YARN and HDFS subclusters transparently scales Hadoop to tens of thousands of machines;
  • Opportunistic container execution improves resource utilization and increases task throughput for short-lived containers. In addition to its traditional, central scheduler, YARN also supports distributed scheduling of opportunistic containers; and 
  • Improved capabilities and performance for cloud storage systems such as Amazon S3 (S3Guard), Microsoft Azure Data Lake, and Aliyun Object Storage System.
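
As an example of how the erasure coding feature surfaces to applications, the sketch below applies one of the built-in policies to a directory through the HDFS client API. The NameNode address, directory, and policy name are placeholders; the same operation is also exposed through the hdfs ec command-line tool.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class ErasureCodingDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder NameNode address and target directory.
            FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
            DistributedFileSystem dfs = (DistributedFileSystem) fs;

            Path coldData = new Path("/archive/cold");
            // New files under this directory are striped with Reed-Solomon 6+3 instead of
            // being replicated three times, roughly halving the raw storage used.
            dfs.setErasureCodingPolicy(coldData, "RS-6-3-1024k");
            System.out.println(dfs.getErasureCodingPolicy(coldData));
        }
    }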

Hadoop 3.0.0 has already undergone extensive testing and integration with the broader Open Source ecosystem at The Apache Software Foundation. With this release, its community of developers and users promotes the release series out of beta.

Apache Hadoop is widely deployed at numerous enterprises and institutions worldwide, such as Adobe, Alibaba, Amazon Web Services, AOL, Apple, Capital One, Cloudera, Cornell University, eBay, ESA Calvalus satellite mission, Facebook, foursquare, Google, Hortonworks, HP, Hulu, IBM, Intel, LinkedIn, Microsoft, Netflix, The New York Times, Rackspace, Rakuten, SAP, Tencent, Teradata, Tesla Motors, Twitter, Uber, and Yahoo. The project maintains a list of known users at https://wiki.apache.org/hadoop/PoweredBy

"It's tremendous to see this significant progress, from the raw tool of eleven years ago, to the mature software in today's release," said Doug Cutting, original co-creator of Apache Hadoop. "With this milestone, Hadoop better meets the requirements of its growing role in enterprise data systems.  The Open Source community continues to respond to industrial demands."

Apache Hadoop's diverse community enjoys continued growth amongst the ASF's most active projects, and remains at the forefront of more than three dozen Apache Big Data projects.

(Chart: Apache Hadoop committer history)

Apache Hadoop has received countless awards, including top prizes at the Media Guardian Innovation Awards and Duke's Choice Awards, and has been hailed by industry analysts:

"...the lifeblood of organizational analytics…" —Gartner

"Hadoop Is Here To Stay" —Forrester

"...today Hadoop is the only cost-sensible and scalable open source alternative to commercially available Big Data management packages. It also becomes an integral part of almost any commercially available Big Data solution and de-facto industry standard for business intelligence (BI)." —MarketAnalysis.com/Market Research Media

"...commanding half of big data’s $100 billion annual market value...Hadoop is the go-to big data framework." —BigDataWeek.com

"Hadoop, and its associated tools, is currently the 'big beast' of the big data world and the Hadoop environment is undergoing rapid development..." —Bloor Research


"The opportunity to effect meaningful, even fundamental change in the Apache Hadoop project remains open," added Douglas. "Our new contributors uprooted the project from its historical strength in Web-scale analytics by introducing powerful, proven abstractions for data management, security, containerization, and isolation. Apache Hadoop drives innovation in Big Data by growing its community. We hope this latest release continues to draw developers, operators, and users to the ASF."

Catch Apache Hadoop in action at the Strata Data Conference in San Jose, CA, 5-8 March 2018, and at dozens of Hadoop Meetups held around the world.

Availability and Oversight
Apache Hadoop software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Hadoop, visit http://hadoop.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server —the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 680 individual Members and 6,300 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Capital One, Cash Store, Cerner, Cloudera, Comcast, Facebook, Google, Hortonworks, Huawei, IBM, Inspur, iSIGMA, ODPi, LeaseWeb, Microsoft, PhoenixNAP, Pivotal, Private Internet Access, Red Hat, Serenata Flowers, Target, Union Investment, WANdisco, and Yahoo. For more information, visit http://www.apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Hadoop", "Apache Hadoop", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Tuesday October 31, 2017

The Apache Software Foundation Announces Apache® Juneau™ as a Top-Level Project

Open Source framework for quickly and easily creating Java-based REST microservices and APIs in use at IBM, The Open Group, and Salesforce, among others.

Forest Hill, MD –31 October 2017– The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today that Apache® Juneau™ has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles.

Apache Juneau is a cohesive framework that allows developers to marshal POJOs (Plain Old Java Objects) and develop REST (Representational State Transfer) microservices and APIs. Marshalling is used to transform an object's memory representation into a data format suitable for moving between different parts of a computer program (or across programs), and to simplify communication with remote objects.

"We've worked hard on making the Apache Juneau code as simple and easy to use as possible," said James Bognar, Vice President of Apache Juneau. "We packed Juneau with rich features and functionality, and have successfully directed our efforts on building a diverse community that will help drive the project’s future. We’re very proud to graduate as an Apache Top-Level Project."

Apache Juneau consists of:

  1. A universal toolkit for marshalling POJOs to a wide variety of content types using a common cohesive framework (a minimal marshalling sketch follows this list);
  2. A universal REST server API for creating self-documenting REST interfaces using POJOs, simply deployed as one or more top-level servlets in any Servlet 3.1.0+ container;
  3. A universal REST client API for interacting with Juneau or 3rd-party REST interfaces using POJOs and proxy interfaces; and
  4. A REST microservice API that combines all the features above with a simple configurable Jetty server for creating lightweight standalone REST interfaces that start up in milliseconds.
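
As a sketch of the marshalling toolkit in item 1, the following round-trips a plain bean through JSON using Juneau's default serializer and parser. The Person bean is hypothetical, and the class and method names follow Juneau's documented marshalling idiom; other content types use the same pattern with their respective serializers and parsers.

    import org.apache.juneau.json.JsonParser;
    import org.apache.juneau.json.JsonSerializer;

    public class JuneauMarshalling {

        // Hypothetical POJO; public fields (or getter/setter pairs) are picked up by
        // Juneau's default bean rules.
        public static class Person {
            public String name;
            public int age;
        }

        public static void main(String[] args) throws Exception {
            Person in = new Person();
            in.name = "Grace";
            in.age = 46;

            // POJO -> JSON, e.g. {"name":"Grace","age":46}
            String json = JsonSerializer.DEFAULT.serialize(in);

            // JSON -> POJO
            Person out = JsonParser.DEFAULT.parse(json, Person.class);
            System.out.println(json + " -> " + out.name + ", " + out.age);
        }
    }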


Apache Juneau is in use at IBM, The Open Group, and Salesforce, among others. The Apache Streams project began incorporating Apache Juneau libraries in late 2016.

"Removing Dropwizard and Jackson in favor of Apache Juneau simplified our dependency tree, increased the performance of our APIs, and added several features, especially HTML rendering, that have been a huge hit," said Steve Blackmon, Vice President of Apache Streams. "An on-going collaboration between our projects continues to expand the capabilities of Juneau's Remoteable library. As Apache Streams adds additional data provider Java SDKs powered by Juneau, the variety of HTTP interfaces that can be modeled and integrated with Juneau has expanded."

"We were able to replace existing home-grown REST interfaces on top of EMF objects with ones based on Apache Juneau and dramatically reduced the size of our codebase," said Craig Chaney, former Jazz Repository team lead at IBM. "We also used it as the basis for our Docker-based microservices in our CLM-as-a-Service offering."

"I have used Apache Juneau on projects where I need to work with Web Services," said David Goddard, Executive IT Specialist at IBM. "Juneau has saved us many development hours, enabling me to easily consume third-party REST APIs and construct my own Web Services far more quickly than I would otherwise be able to. Juneau also aids the development of robust, maintainable applications with clear logical code structure."

"When The Apache Software Foundation moved the JSON.org license to Category X, successors for JSON processing were needed," said John D. Ament, Vice President of the Apache Incubator, and Apache Juneau incubation mentor. "Apache Juneau was identified as a clean solution. It provides an easy to use API, great performance and a large number of features that made it a strong recommendation for others to leverage."

"As Apache Juneau grows, we welcome new contributors to join the project and take an active role in its development," added Bognar. "Whether reviewing user code, helping with feedback, or contributing code changes through the mailing list, we look forward to learning more about usage patterns to further improve the product."

Meet members of the Apache Juneau community at the Salesforce Dreamforce 2017 conference 6-9 November 2017 in San Francisco.

Availability and Oversight
Apache Juneau software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Juneau, visit http://juneau.apache.org/ and https://twitter.com/ApacheJuneau

About the Apache Incubator
The Apache Incubator is the entry path for projects and codebases wishing to become part of the efforts at The Apache Software Foundation. All code donations from external organizations and existing external projects wishing to join the ASF enter through the Incubator to: 1) ensure all donations are in accordance with the ASF legal standards; and 2) develop new communities that adhere to our guiding principles. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. For more information, visit http://incubator.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 680 individual Members and 6,300 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Capital One, Cash Store, Cerner, Cloudera, Comcast, Facebook, Google, Hewlett Packard, Hortonworks, Huawei, IBM, Inspur, iSIGMA, ODPi, LeaseWeb, Microsoft, PhoenixNAP, Pivotal, Private Internet Access, Red Hat, Serenata Flowers, Target, WANdisco, and Yahoo. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Juneau", "Apache Juneau", "Streams", "Apache Streams", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Saturday September 09, 2017

Apache Struts Statement on Equifax Security Breach

UPDATE: MEDIA ALERT: The Apache Software Foundation Confirms Equifax Data Breach Due to Failure to Install Patches Provided for Apache® Struts™ Exploit

The Apache Struts Project Management Committee (PMC) would like to comment on the Equifax security breach, its relation to the Apache Struts Web Framework and associated media coverage.

We are sorry to hear the news that Equifax suffered a security breach and information disclosure incident that was potentially carried out by exploiting a vulnerability in the Apache Struts Web Framework. At this point in time it is not clear which Struts vulnerability would have been utilized, if any. In an online article published on Quartz.com [1], the assumption was made that the breach could be related to CVE-2017-9805, which was publicly announced on 2017-09-04 [2] along with new Struts Framework software releases to patch this and other vulnerabilities [3][4]. However, the security breach was already detected in July [5], which means that the attackers either used an earlier announced vulnerability on an unpatched Equifax server or exploited a vulnerability not known at that point in time --a so-called zero-day exploit. If the breach was caused by exploiting CVE-2017-9805, it would have been a zero-day exploit at that time. The article also states that the CVE-2017-9805 vulnerability has existed for nine years.

We as the Apache Struts PMC want to make clear that the development team puts enormous effort into securing and hardening the software we produce, and into fixing problems whenever they come to our attention. In alignment with the Apache security policies, once we are notified of a possible security issue, we privately work with the reporting entity to reproduce and fix the problem and roll out a new release hardened against the found vulnerability. We then publicly announce the problem description and how to fix it. Even if exploit code is known to us, we try to hold back this information for several weeks to give Struts Framework users as much time as possible to patch their software products before exploits pop up in the wild. However, since vulnerability detection and exploitation have become a professional business, it is and always will be likely that attacks will occur even before we fully disclose the attack vectors, by reverse engineering the code that fixes the vulnerability in question or by scanning for yet unknown vulnerabilities.

Regarding the assertion that CVE-2017-9805 in particular is a nine-year-old security flaw, one has to understand that there is a huge difference between detecting a flaw after nine years and knowing about a flaw for several years. If the latter were the case, the team would have had a hard time providing a good answer as to why they did not fix it earlier. But this was actually not the case here --we were notified only recently of how a certain piece of code can be misused, and we fixed it as soon as possible. What we saw here is common software engineering practice: people write code to achieve a desired function, but may not be aware of undesired side-effects. Once this awareness is reached, we as well as, hopefully, all other library and framework maintainers put great effort into removing the side-effects as soon as possible. It is probably fair to say that we met this goal pretty well in the case of CVE-2017-9805.

Our general advice to businesses and individuals utilizing Apache Struts as well as any other open or closed source supporting library in their software products and services is as follows:

1. Understand which supporting frameworks and libraries are used in your software products and in which versions. Keep track of security announcements affecting these products and versions.

2. Establish a process to quickly roll out a security fix release of your software product once supporting frameworks or libraries need to be updated for security reasons. It is best to think in terms of hours or a few days, not weeks or months. Most breaches we become aware of are caused by failure to update software components that are known to be vulnerable for months or even years.

3. Any complex software contains flaws. Don't build your security policy on the assumption that supporting software products are flawless, especially in terms of security vulnerabilities.

4. Establish security layers. It is good software engineering practice to have individually secured layers behind a public-facing presentation layer such as the Apache Struts framework. A breach into the presentation layer should never empower access to significant or even all back-end information resources. 

5. Establish monitoring for unusual access patterns to your public Web resources. Nowadays there are a lot of open source and commercial products available to detect such patterns and give alerts. We recommend such monitoring as good operations practice for business critical Web-based services.

If followed, these recommendations help to prevent breaches such as the one unfortunately experienced by Equifax.

For the Apache Struts Project Management Committee,

René Gielen
Vice President, Apache Struts 

[1] https://qz.com/1073221/the-hackers-who-broke-into-equifax-exploited-a-nine-year-old-security-flaw/
[2] https://cwiki.apache.org/confluence/display/WW/S2-052
[3] https://cwiki.apache.org/confluence/display/WW/Version+Notes+2.5.13
[4] https://cwiki.apache.org/confluence/display/WW/Version+Notes+2.3.34
[5] https://baird.bluematrix.com/docs/pdf/dbf801ef-f20e-4d6f-91c1-88e55503ecb0.pdf

Monday May 15, 2017

The Apache Software Foundation Announces Apache® Samza™ v0.13

Open Source Big Data distributed stream processing framework in production at Intuit, LinkedIn, Netflix, Optimizely, Redfin, and Uber, among other organizations.

Forest Hill, MD —15 May 2017— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today the availability of Apache® Samza™  v0.13, the latest version of the Open Source Big Data distributed stream processing framework.

An Apache Top-Level Project (TLP) since January 2015, Samza is designed to provide support for fault-tolerant, large scale stream processing. Developers use Apache Samza to write applications that consume streams of data and to help organizations understand and respond to their data in real-time. Apache Samza offers a unified API to process streaming data from pub-sub messaging systems like Apache Kafka and batch data from Apache Hadoop.
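
For illustration only, the following minimal sketch shows the shape of a Samza processor written against the framework's low-level StreamTask interface; the system and stream names and the string payload are hypothetical, and a real job would declare its inputs, outputs, and serialization in its job configuration.

    import org.apache.samza.system.IncomingMessageEnvelope;
    import org.apache.samza.system.OutgoingMessageEnvelope;
    import org.apache.samza.system.SystemStream;
    import org.apache.samza.task.MessageCollector;
    import org.apache.samza.task.StreamTask;
    import org.apache.samza.task.TaskCoordinator;

    // Minimal sketch: read each incoming message and forward the ones that match
    // a (hypothetical) filter to an output stream on Kafka.
    public class PageViewFilterTask implements StreamTask {
      private static final SystemStream OUTPUT =
          new SystemStream("kafka", "checkout-page-views"); // hypothetical stream name

      @Override
      public void process(IncomingMessageEnvelope envelope,
                          MessageCollector collector,
                          TaskCoordinator coordinator) {
        String message = (String) envelope.getMessage(); // assumes a string serde
        if (message.contains("checkout")) {
          collector.send(new OutgoingMessageEnvelope(OUTPUT, message));
        }
      }
    }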

"The latest 0.13 release takes Apache Samza's data processing capabilities to the next level with multiple new features," said Yi Pan, Vice President of Apache Samza. "It also improves the simplicity and portability of real-time applications."

Apache Samza powers several real-time data processing needs including realtime analytics on user data, message routing, combating fraud, anomaly detection, performance monitoring, real-time communication, and more. Apache Samza can process up to 1.1 million messages per second on a single machine. v0.13 highlights include:
  • A higher-level API that developers can use to express complex processing pipelines on streams more concisely;
  • Support for running Samza applications as a lightweight embedded library without relying on YARN;
  • Support for flexible deployment options; 
  • Support for rolling upgrade of running Samza applications;
  • Improved monitoring and failure detection using a built-in heartbeat mechanism;
  • Enabling better integrations with other cluster-manager frameworks and environments; and
  • Several bug fixes that improve the reliability, stability, and robustness of data processing.

Organizations such as Intuit, LinkedIn, Netflix, Optimizely, Redfin, TripAdvisor, and Uber rely on Apache Samza to power complex data architectures that process billions of events each day. A list of user organizations is available at https://cwiki.apache.org/confluence/display/SAMZA/Powered+By

"Apache Samza is a highly performant stream/data processing system that has been battle tested over the years of powering mission critical applications in a wide range of businesses," said Kartik Paramasivam, Head of Streams Infrastructure, and Director of Engineering at LinkedIn. "With this 0.13 release, the power of Samza is no longer limited to YARN based topologies. It can now be used in any hosting environment. In addition, it now has a new higher level API that makes it significantly easier to create arbitrarily complex processing pipelines."

"Apache Samza has been powering near real-time use cases at Uber for the last year and a half," said Chinmay Soman, Staff Software Engineer at Uber. "This ranges from analytical use cases such as understanding business metrics, feature extraction for machine learning as well as some critical applications such as Fraud detection, Surge pricing and Intelligent promotions. Samza has been proven to be robust in production and is currently processing about billions of messages per day, accounting for 100s of TB of data flowing through the system." 

"At Optimizely, we have built the world’s leading experimentation platform, which ingests billions of click-stream events a day from millions of visitors for analysis," said Vignesh Sukumar, Senior Engineering Manager at Optimizely. "Apache Samza has been a great asset to Optimizely's Event ingestion pipeline allowing us to perform large scale, real time stream computing such as aggregations (e.g. session computations) and data enrichment on a multiple billion events/day scale. The programming model, durability and the close integration with Apache Kafka fit our needs perfectly."

"It has been a phenomenal experience engaging with this vibrant international community of users and contributors, and I look forward to our continued growth. It is a great time to be involved in the project and we welcome new contributors to the Samza community," added Pan.

Catch Apache Samza in action at Apache: Big Data, 16-18 May 2017 in Miami, FL http://apachecon.com/ , where the community will be showcasing how Samza simplifies stream processing at scale.

Availability and Oversight
Apache Samza software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Samza, visit http://samza.apache.org/ , https://blogs.apache.org/samza/ , and https://twitter.com/samzastream

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 680 individual Members and 6,000 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Capital One, Cash Store, Cerner, Cloudera, Comcast, Confluent, Facebook, Google, Hortonworks, HP, Huawei, IBM, InMotion Hosting, iSigma, LeaseWeb, Microsoft, ODPi, PhoenixNAP, Pivotal, Private Internet Access, Produban, Red Hat, Serenata Flowers, Target, WANdisco, and Yahoo. For more information, visit http://www.apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Hadoop", "Apache Hadoop", "Kafka", "Apache Kafka", "Samza", "Apache Samza", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Wednesday February 08, 2017

The Apache Software Foundation Announces Apache® Ranger™ as a Top-Level Project

Big Data security management framework for the Apache Hadoop ecosystem in use at ING, Protegrity, and Sprint, among other organizations.

Forest Hill, MD —8 February 2017— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today that Apache® Ranger™ has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles.

The latest addition to the ASF’s more than three dozen projects in Big Data, Apache Ranger is a centralized framework used to define, administer and manage security policies consistently across Apache Hadoop components. Ranger also offers the most comprehensive security coverage, with native support for numerous Apache projects, including Atlas (incubating), HBase, HDFS, Hive, Kafka, Knox, NiFi, Solr, Storm, and YARN. 

"Graduating to a Top-Level Project reflects the maturity and growth of the Ranger Community," said Selvamohan Neethiraj, Vice President of Apache Ranger. "We are pleased to celebrate a great milestone and officially play an integral role in the Apache Big Data ecosystem."

Apache Ranger provides a simple and effective way to set access control policies and audit data access across the entire Hadoop stack, following industry best practices. One of the key benefits of Ranger is that access control policies can be managed by security administrators from a single place, consistently across the Hadoop ecosystem. Ranger also enables the community to add authorization support for new systems, even outside the Hadoop ecosystem, through a robust plugin architecture that can be extended with minimal effort. In addition, Apache Ranger provides many advanced features, such as:
  • Ranger Key Management Service (compatible with Hadoop’s native KMS API to store and manage encryption keys for HDFS Transparent Data Encryption);
  • Dynamic column masking and row filtering;
  • Dynamic policy conditions (such as prohibition of toxic joins);
  • User context enrichers (such as geo-location and time of day mappings); and
  • Classification or tag based policies for Hadoop ecosystem components via integration with Apache Atlas.
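
To illustrate the plugin architecture described above, here is a hedged sketch of how a component might ask Ranger for an authorization decision through the plugin library; the service type, application id, user, and resource values below are hypothetical, and a real deployment obtains them from the component's Ranger configuration and security context.

    import org.apache.ranger.plugin.policyengine.RangerAccessRequestImpl;
    import org.apache.ranger.plugin.policyengine.RangerAccessResourceImpl;
    import org.apache.ranger.plugin.policyengine.RangerAccessResult;
    import org.apache.ranger.plugin.service.RangerBasePlugin;

    // Hedged sketch: ask Ranger whether a user may run SELECT on a Hive table.
    public class RangerAuthorizationSketch {
      public static void main(String[] args) {
        // "hive" / "hiveServer2" are illustrative; they identify the service type
        // and application to the Ranger Admin server.
        RangerBasePlugin plugin = new RangerBasePlugin("hive", "hiveServer2");
        plugin.init(); // downloads policies from Ranger Admin and keeps them refreshed

        RangerAccessResourceImpl resource = new RangerAccessResourceImpl();
        resource.setValue("database", "sales");   // hypothetical resource
        resource.setValue("table", "orders");

        RangerAccessRequestImpl request = new RangerAccessRequestImpl();
        request.setResource(resource);
        request.setUser("analyst1");              // hypothetical user
        request.setAccessType("select");

        RangerAccessResult result = plugin.isAccessAllowed(request);
        System.out.println("allowed = " + (result != null && result.getIsAllowed()));
      }
    }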

"As early adopters of Apache Ranger and having contributed to Apache Ranger, we have come to rely upon Apache Ranger as a key part of our security infrastructure for data," said Ferd Scheepers, Chief Information Architect at ING. "We are therefore pleased to learn that the project has now graduated to a TLP project through the efforts of the Apache community. We believe that Apache Ranger represents the best-in-class Open Source security framework for authorization, encryption management, and auditing across Hadoop ecosystem. We laud the community's efforts in building an extensible and enterprise grade architecture for Apache Ranger, and for innovative features such as tag or classification based security (built in conjunction with Apache Atlas). We congratulate the Apache Ranger community on achieving this significant milestone and are confident Apache Ranger will evolve into the de-facto standard for security stack across the Hadoop ecosystem."

"As heavy users of Apache Ranger in production, we are pleased to see the project become a TLP through validation across community efforts," said Timothy R. Connor, Big Data & Advanced Analytics Manager at Sprint. "Apache Ranger has built a next generation ABAC model for authorization along with a robust data-centric Open Source security framework supporting advanced security capabilities such as dynamic row filtering and column masking. All of these point to Apache Ranger maturing into a robust and comprehensive security product for authorization, encryption management and auditing through the Apache community."

"It's great to see Apache Ranger become a TLP," said Dominic Sartorio, Senior Vice President of Products & Development at Protegrity. "Apache Ranger's comprehensive auditing and broad authorization coverage across the Hadoop ecosystem, along with its highly scalable and extensible architecture and rich set of APIs, integrates very well with Protegrity's fine grained data protection capabilities. Our continued collaboration with the Apache Ranger community will help meet the data security requirements of the next generation of enterprise-grade production Hadoop deployments."

"As organizations entrust their enterprise data to Open Source data platforms such as Apache Hadoop, there is a critical need to use the most innovative techniques to safeguard this data," said Alan Gates, Co-Founder of HortonWorks and Apache Ranger incubation mentor. "Apache Ranger community has taken the original, proprietary code base and used it to build a new and successful Apache project that employs an attribute-based approach to define and enforce authorization policies. This modern approach is a combination of subject, action, resource, and environment and goes beyond role-based access control techniques exclusively based on organizational roles - permissions mapping. It has been a pleasure to be their mentor in this process and help them learn the Apache way."

"More and more users are adopting Apache Ranger to secure data in the Hadoop ecosystem," added Neethiraj. "We look forward to welcoming new Ranger users to our mailing lists and community events."

Availability and Oversight
Apache Ranger software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For project updates, downloads, documentation, and ways to become involved with Apache Ranger, visit https://ranger.apache.org/ and @ApacheRanger.

About the Apache Incubator
The Apache Incubator is the entry path for projects and codebases wishing to become part of the efforts at The Apache Software Foundation. All code donations from external organizations and existing external projects wishing to join the ASF enter through the Incubator to: 1) ensure all donations are in accordance with the ASF legal standards; and 2) develop new communities that adhere to our guiding principles. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. For more information, visit http://incubator.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 620 individual Members and 5,900 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Capital One, Cash Store, Cerner, Cloudera, Comcast, Confluent, Facebook, Google, Hortonworks, HP, Huawei, IBM, InMotion Hosting, iSigma, LeaseWeb, Microsoft, ODPi, PhoenixNAP, Pivotal, Private Internet Access, Produban, Red Hat, Serenata Flowers, Target, WANdisco, and Yahoo. For more information, visit http://www.apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Ranger", "Apache Ranger", "HBase", "Apache HBase", "HDFS", "Apache HDFS", "Hive", "Apache Hive", "Kafka", "Apache Kafka", "Knox", "Apache Knox", "NiFi", "Apache NiFi", "Solr", "Apache Solr", "Storm", "Apache Storm", "YARN", "Apache YARN", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #


Wednesday June 29, 2016

The Apache Software Foundation Announces Apache® OODT™ v1.0

Open Source Big Data middleware metadata framework in use at Children's Hospital Los Angeles Virtual Pediatric Intensive Care Unit, DARPA MEMEX and XDATA, NASA Jet Propulsion Laboratory, and the National Cancer Institute, among others.

Forest Hill, MD —29 June 2016— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today the availability of Apache® OODT™ v1.0, the Big Data middleware metadata framework.

OODT is a grid middleware framework for science data processing, information integration, and retrieval. As "middleware for metadata" (and vice versa), OODT is used for computer processing workflow, hardware and file management, information integration, and linking databases. The OODT architecture allows distributed computing and data resources to be searchable and utilized by any end user.

"Apache OODT 1.0 is a great milestone in this project," said Tom Barber, Vice President of Apache OODT. "Effectively managing data pools has historically been problematic for some users, and OODT addresses a number of the issues faced. v1.0 allows us to prepare for some big changes within the platform with new UI designs for user-facing apps and data flow processing under the hood. It's an exciting time in the data management sector and we believe Apache OODT can be at the forefront of it."

OODT 1.0 signals a stage in the project where the initial scope of the platform is feature-complete and ready for general consumption. v1.0 features include:
  • Data ingestion and processing;
  • Automatic data discovery and metadata extraction;
  • Metadata management;
  • Workflow processing and support; and
  • Resource management

Originally created at NASA Jet Propulsion Laboratory in 1998 as a way to build a national framework for data sharing, OODT has been instrumental to the National Cancer Institute's Early Detection Research Network for managing distributed scientific data sets across 20+ institutions nationwide for more than a decade.

Apache OODT is in use in many scientific data system projects in Earth science, planetary science, and astronomy at NASA, such as the Lunar Mapping and Modeling Project (LMMP), NPOESS Preparatory Project (NPP) Sounder PEATE Testbed, the Orbiting Carbon Observatory-2 (OCO-2) project, and the Soil Moisture Active Passive mission testbed. In addition, OODT is used for large-scale data management and data preparation tasks in the DARPA MEMEX and XDATA efforts, and for supporting research and data analysis within the pediatric intensive care domain in collaboration with Children's Hospital Los Angeles (CHLA) and its Laura P. and Leland K. Whittier Virtual Pediatric Intensive Care Unit (VPICU), among many other applications.

"To watch Apache OODT grow from an internal NASA project to 1.0 where it is today and dozens of releases is an amazing feat. I truly believe having it at the ASF has allowed it to grow and prosper. We are doubling down on our commitment to Apache OODT, investing in its enhancement and use in several national-scale projects," said Chris Mattmann, member of the Apache OODT Project Management Committee, and Chief Architect, Instrument and Science Data Systems Section at NASA JPL. "Apache OODT processes some of the world's biggest data sets, distributes and manages them, and makes sure science happens in a timely and accurate fashion."

OODT entered the Apache Incubator in January 2010, and graduated as a Top-level Project in November 2010. 

Catch Apache OODT in action at ApacheCon Europe, 14-18 November 2016 in Seville, Spain http://apachecon.com/ .

Availability and Oversight
Apache OODT software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache OODT, visit http://oodt.apache.org/ and https://twitter.com/apache_oodt

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 550 individual Members and 5,300 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Cerner, Cloudera, Comcast, Confluent, Facebook, Google, Hortonworks, HP, Huawei, IBM, InMotion Hosting, iSigma, LeaseWeb, Microsoft, ODPi, PhoenixNAP, Pivotal, Private Internet Access, Produban, Red Hat, Serenata Flowers, WANdisco, and Yahoo. For more information, visit http://www.apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "OODT", "Apache OODT", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Monday May 23, 2016

The Apache Software Foundation Announces Apache® TinkerPop™ as a Top-Level Project

Powerful Open Source Big Data graph computing framework in use at Amazon, DataStax, and IBM, among others.

Forest Hill, MD –23 May 2016– The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today that Apache® TinkerPop™ has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles.

Apache TinkerPop is a graph computing framework that provides developers the tools required to build modern graph applications in any application domain and at any scale.

"Graph databases and mainstream interest in graph applications have seen tremendous growth in recent years," said Stephen Mallette, Vice President of Apache TinkerPop. "Since its inception in 2009, TinkerPop has been helping to promote that growth with its Open Source graph technology stack. We are excited to now do this same work as a top-level project within the Apache Software Foundation."

As a graph computing framework for both real-time, transactional graph databases (OLTP) and batch analytic graph processors (OLAP), TinkerPop is useful for working with small graphs that fit within the confines of a single machine, as well as massive graphs that can only exist partitioned and distributed across a multi-machine compute cluster.

TinkerPop unifies these highly varied graph system models, giving developers less to learn, faster time to development, and less risk associated with both scaling their system and avoiding vendor lock-in.

The Power to Process One Trillion Edges
The central component of Apache TinkerPop is Gremlin, a graph traversal machine and language, which makes it possible to write complex queries (called traversals) that can execute either as real-time OLTP queries, analytic OLAP queries, or a hybrid of the two.

Because the Gremlin language is separate from the Gremlin machine, TinkerPop serves as a foundation for any query language to work against any TinkerPop-enabled system. Much like the Java virtual machine is host to Java, Groovy, Scala, Clojure, and the like, the Gremlin traversal machine is already host to Gremlin, SPARQL, SQL, and various host-language embeddings in Python, JavaScript, etc. Once a language is compiled to a Gremlin traversal, the Gremlin machine can evaluate it against a graph database or processor. The same language, such as SPARQL, can thus execute either as a long-running analytic job across a one-thousand-node cluster touching large parts of the graph, or as a sub-second query within a small neighborhood.
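
As a small, self-contained illustration of a Gremlin traversal in Java, the sketch below runs against TinkerPop's bundled in-memory TinkerGraph and its sample "modern" toy graph; the same traversal could be evaluated, unchanged, by any TinkerPop-enabled graph database or processor.

    import java.util.List;

    import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
    import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerFactory;
    import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

    // Minimal Gremlin traversal: find the names of the people that "marko" knows
    // in the small sample graph that ships with TinkerPop.
    public class GremlinTraversalSketch {
      public static void main(String[] args) {
        TinkerGraph graph = TinkerFactory.createModern(); // in-memory sample graph
        GraphTraversalSource g = graph.traversal();

        List<Object> names = g.V().has("name", "marko")
                                  .out("knows")
                                  .values("name")
                                  .toList();

        System.out.println(names); // prints the neighbours, e.g. [vadas, josh]
      }
    }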

Apache TinkerPop is in use at organizations such as DataStax and IBM, among many others. Amazon.com is currently using TinkerPop and Gremlin to process its order fulfillment graph, which contains approximately one trillion edges.

The core Apache TinkerPop release provides production-ready reference implementations for a number of different data systems, including Neo4j (OLTP), Apache Giraph (OLAP), Apache Spark (OLAP), and Apache Hadoop (OLAP). However, the bulk of the implementations are maintained within the larger TinkerPop ecosystem. These implementations include commercial and Open Source graph databases and processors, Gremlin language variants for various programming languages on and off the Java Virtual Machine, visualization applications for graph analysis, and many other tools and libraries. The TinkerPop ecosystem is richly supported, with many options for developers to choose from.

TinkerPop originated in 2009 at the Los Alamos National Laboratory. After two major releases (TinkerPop1 in 2011 and TinkerPop2 in 2012), the project was submitted to the Apache Incubator in January 2015.

"Following in a long line of Apache projects that revolutionized entire industries, starting with with the Apache HTTP Server, continuing with Web Services, search, and Big Data technologies, Apache TinkerPop will no doubt reshape the Graph Computing landscape," said Hadrian Zbarcea, co-Vice President of ASF Fundraising and Incubator Mentor of Apache TinkerPop. "While TinkerPop has just graduated as an ASF Top Level Project, it is already seven years old, a mature technology, backed by a number of vendors, a vibrant community, and absolutely brilliant developers."

The project welcomes those interested in contributing to Apache TinkerPop. For more information, visit http://tinkerpop.apache.org/docs/3.2.0-incubating/dev/developer/#_contributing

Availability and Oversight
Apache TinkerPop software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache TinkerPop, visit http://tinkerpop.apache.org/ and https://twitter.com/apachetinkerpop

About the Apache Incubator
The Apache Incubator is the entry path for projects and codebases wishing to become part of the efforts at The Apache Software Foundation. All code donations from external organizations and existing external projects wishing to join the ASF enter through the Incubator to: 1) ensure all donations are in accordance with the ASF legal standards; and 2) develop new communities that adhere to our guiding principles. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. For more information, visit http://incubator.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 550 individual Members and 5,300 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Cerner, Cloudera, Comcast, Confluent, Facebook, Google, Hortonworks, HP, Huawei, IBM, InMotion Hosting, iSigma, LeaseWeb, Microsoft, ODPi, PhoenixNAP, Pivotal, Private Internet Access, Produban, Red Hat, Serenata Flowers, WANdisco, and Yahoo. For more information, visit http://www.apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "TinkerPop", "Apache TinkerPop", "Apache HTTP Server", "Giraph", "Apache Giraph", "Hadoop", "Apache Hadoop", "Spark", "Apache Spark" and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Tuesday March 08, 2016

The Apache Software Foundation Announces Apache® Flink™ v1.0

Advanced distributed stream processing framework performs 50x faster than other real-time computation systems

Forest Hill, MD —8 March 2016— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today the availability of Apache® Flink™ v1.0, the advanced Open Source distributed real-time stream processing system.

Apache Flink is an easy-to-use yet sophisticated Open Source stream processor, with recent test results showing it to be at least 50x faster than other distributed real-time computation systems.

"Releasing Flink 1.0 is the most important milestone in the project since graduation to a top-level Apache project one year ago," said Stephan Ewen, Vice President of Apache Flink and co-founder/CTO of data Artisans. "This is a collective achievement of more than 150 individuals that have contributed code to date."

Under The Hood
Flink uniquely supports a combination of features that include flexible windowing on event time, out-of-order stream handling, high availability, and exactly-once guarantees, together with high event throughput and low processing latency.
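
For illustration, here is a minimal sketch of the DataStream API's shape: a keyed, continuously updated count per word. The hard-coded input elements stand in for a real source such as Apache Kafka, and event-time windowing, watermarks, and checkpointing would be configured separately in a production job.

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    // Minimal sketch: count occurrences per word as elements arrive.
    public class FlinkWordCountSketch {
      public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tuple2<String, Integer>> words = env.fromElements(
            Tuple2.of("flink", 1), Tuple2.of("stream", 1), Tuple2.of("flink", 1));

        words.keyBy(0)   // group by the word (field 0 of the tuple)
             .sum(1)     // rolling count per word (field 1)
             .print();   // emits an updated count whenever a new element arrives

        env.execute("word-count-sketch");
      }
    }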

v1.0 furthers Apache Flink's maturity, making it significantly easier to program, deploy, and maintain Flink pipelines at scale by:
  • guaranteeing backwards compatibility of public APIs across all 1.x.y versions;
  • introducing functionality for complex event processing (CEP);
  • supporting large state beyond memory limits;
  • supporting state versioning and savepoints; and 
  • improving the system's monitoring functionality

"Flink v1.0 is indeed a testament to the maturity of the platform, which now enjoys production use at Fortune Global 500, as well as leading tech companies," said Kostas Tzoumas, member of the Apache Flink Project Management Committee, and co-founder/CEO of data Artisans.

"Google congratulates the Apache Flink community for this achievement," said William Vambenepe, Lead Product Manager for Big Data on Google Cloud Platform. "Flink is unlocking the richness of stream processing at scale, and delivering on the promise of the Dataflow Programming Model for all users, anywhere. We look forward to continuing to work with the Flink community, including further unification of APIs as part of Apache Beam (incubating)."

"At King.com we are using Flink to process more than 30 billion events daily, leveraging Flink's stateful streaming abstractions," said Christofer Waldenström, Team Lead for Streaming Platform at King.com. "We find that Flink provides a convenient way to interact with real-time data for complex streaming use-cases involving large state beyond memory."

"Apache Flink proved to be a valuable framework in our day-to-day business. It helps us to process log events, aggregate tracking information, apply filters and decide upon message routing," said Christian Kreutzfeldt, Senior Solution Developer & Architect at Otto Group BI. "We are still excited to see how fast new applications can be implemented and deployed. Even complex requirements do not constitute a significant challenge. For the upcoming version 1.0 we are looking forward to see a stabilized API and the advanced monitoring features. Especially the back pressure monitoring could become a great tool to understand internal processing behavior much better. Furthermore from an enterprise user perspective we are happy to see that Apache Flink finally reached version 1.0 which typically opens the door to the broader enterprise market."

Flink originated in the Stratosphere research project, started in 2009 by the Technical University of Berlin along with several other European universities. The project was submitted to the Apache Incubator in April 2014 and became an Apache Top-Level Project in December 2014.

Today, Flink is among the ASF's dynamic Big Data projects, with more than 150 contributors to date, a wealth of production deployments, and commercial support by data Artisans, a company founded by the core team that originally developed Flink.

"The two things that have always struck me about Flink has been the excellence of the code and the excellence of the team," said Ted Dunning, Vice President of the Apache Incubator and Chief Application Architect at MapR. "This pattern is continuing with this release."

Get Involved!
Apache Flink welcomes contributions and community participation through its mailing lists, as well as at face-to-face MeetUps, developer trainings, and the following events:
  • QCon (London, 7-9 March 2016)
  • Strata/Hadoop World (San Jose, 28-31 March 2016)
  • Hadoop Summit (Dublin, 13-14 April 2016)
  • Kafka Summit (San Francisco, 26 April 2016)
  • Apache: Big Data (Vancouver, 9-12 May 2016)
  • OSCON (Austin, TX, 18-19 May 2016)
  • Strata/Hadoop World (London, 31 May - 3 June 2016)
  • Berlin Buzzwords (Berlin, 5-7 June 2016)
  • Flink Forward (Berlin, 12-14 September 2016)

Availability and Oversight
Apache Flink software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Flink, visit http://flink.apache.org/ and https://twitter.com/ApacheFlink

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 550 individual Members and 5,300 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Cerner, Cloudera, Comcast, Confluent, Facebook, Google, Hortonworks, HP, Huawei, IBM, InMotion Hosting, iSigma, LeaseWeb, Microsoft, PhoenixNAP, Pivotal, Private Internet Access, Produban, Red Hat, Serenata Flowers, WANdisco, and Yahoo. For more information, visit http://www.apache.org/ or follow @TheASF on Twitter.

© The Apache Software Foundation. "Apache", "Apache Beam (incubating)", "Beam (incubating)", "Apache Cassandra", "Cassandra", "Apache Flink", "Flink", "Apache Hadoop", "Hadoop", "Apache HBase", "HBase", "Apache Kafka", "Kafka", "Apache MapReduce", "MapReduce", "Apache Storm", "Storm", "Apache YARN", "YARN", "ApacheCon", and their logos are registered trademarks or trademarks of The Apache Software Foundation in the U.S. and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Monday November 23, 2015

The Apache Software Foundation Announces Apache™ Brooklyn™ as a Top-Level Project

Open Source framework for modelling, deploying, monitoring and managing applications in use at Canopy, IBM, SWIFT, and Virtustream, among others.

Forest Hill, MD –23 November 2015– The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today that Apache™ Brooklyn™ has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles.

Apache Brooklyn is an application blueprint and management platform used for integrating services across multiple data centers, as well as a wide range of software in the Cloud.

"We're very proud of the work that our community has done to bring us to graduation," said Richard Downer, Vice President of Apache Brooklyn. "Our time in the Apache Incubator has given us the opportunity to grow the project, both its community and its code. Users of Brooklyn can now be confident that this is a project that is going to be around for a long time to come."

With modern applications being composed of many components, and increasing interest in micro-services architecture, the deployment and ongoing evolution of deployed apps is an increasingly difficult problem. Apache Brooklyn’s blueprints provide a clear, concise way to model an application, its components and their configuration, and the relationships between components, before deploying to public Cloud or private infrastructure. Policy-based management, built on the foundation of autonomic computing theory, continually evaluates the running application and makes modifications to it to keep it healthy and optimize for metrics such as cost and responsiveness.

Cloud service providers Canopy and Virtustream both recognize the value of having an application-centered view of services and have created product offerings built on Apache Brooklyn. IBM has also made extensive use of Apache Brooklyn in order to migrate large workloads from AWS to IBM Softlayer.

Apache Brooklyn is in use at SWIFT (Society for Worldwide Interbank Financial Telecommunication), creators of the industry syntax standard for financial messages. "Apache Brooklyn fills a gap in orchestration of service delivery," said Otmane Benali, Manager of Messaging Integration at SWIFT. "Its use of the CAMP standard provides operations a single window to managing heterogeneous platforms, very common in large enterprises."

Brooklyn was created by ASF sponsor Cloudsoft Corporation in 2011, and was submitted to the Apache Incubator in May 2014. The project recently released version 0.8.0, and is continuing to evolve fast, with the aim of making a stable, well-featured 1.0 release in the first half of 2016.

"Congratulations to Brooklyn for becoming an Apache Top Level Project," said Hadrian Zbarcea, Apache Brooklyn Incubator Mentor, ASF Member, and President of Apifocal. "As a standards based, modular, extensible framework for modeling, monitoring and managing Cloud applications through autonomic blueprints, Brooklyn offers a new paradigm for Cloud platforms deployment and has the potential to create new markets --similar to what virtualization meant for the Cloud computing space."

In addition, Brooklyn has relationships to several other Apache projects. "We are big consumers of Apache jclouds, and contributors to it, so that we get strong cross-Cloud portability," added Downer. "This made the Apache Software Foundation a natural home for Brooklyn. In addition, the Brooklyn community offers off-the-shelf blueprints for many well-known Apache projects, from Cassandra and Qpid to Mesos and Hadoop."

Catch Apache Brooklyn in action at Cloud Foundry Summit Asia in Shanghai on 3 December 2015 http://cfasia2015.sched.org/event/4jwB

Availability and Oversight
Apache Brooklyn software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Brooklyn, visit http://brooklyn.apache.org/ and https://twitter.com/ApacheBrooklyn

About the Apache Incubator
The Apache Incubator is the entry path for projects and codebases wishing to become part of the efforts at The Apache Software Foundation. All code donations from external organizations and existing external projects wishing to join the ASF enter through the Incubator to: 1) ensure all donations are in accordance with the ASF legal standards; and 2) develop new communities that adhere to our guiding principles. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. For more information, visit http://incubator.apache.org/

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 550 individual Members and 4,700 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Bloomberg, Budget Direct, Cerner, Citrix, Cloudera, Comcast, Facebook, Google, Hortonworks, HP, Huawei, IBM, InMotion Hosting, iSigma, LeaseWeb, Matt Mullenweg, Microsoft, PhoenixNAP, Pivotal, Private Internet Access, Produban, Red Hat, Serenata Flowers, WANdisco, and Yahoo. For more information, visit http://www.apache.org/ or follow @TheASF on Twitter.

© The Apache Software Foundation. "Apache", "Brooklyn", "Apache Brooklyn", "Cassandra", "Hadoop", "jclouds", "Mesos", "Qpid", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #

Tuesday January 27, 2015

The Apache Software Foundation Announces Apache™ Samza™ as a Top-Level Project

Open Source Big Data distributed stream processing framework used in business intelligence, financial services, healthcare, mobile applications, security, and software development, among other industries.

Forest Hill, MD –27 January 2015– The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today that Apache™ Samza™ has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles.

"The incubation process at Apache has been great. It has helped us cultivate a strong community, and provided us with the support and infrastructure to make Samza grow," said Chris Riccomini, Vice President of Apache Samza.

Apache Samza is a distributed stream processing framework, designed to handle fault tolerance, stateful processing, message durability, and scalability. Samza helps users to write light-weight processors that consume streams of data from messaging systems such as Apache Kafka. These processors empower organizations to understand and react to their data in real-time. In addition, Samza uses Apache Hadoop YARN to provide fault tolerance, processor isolation, security, and resource management.

Samza represents a different approach to stream processing. It has been purpose-built first and foremost as a production-grade system with operability and scalability in mind. Samza integrates tightly with Apache Kafka, which makes it a natural fit for those already running Kafka in their data pipeline. The framework also introduces the concept of stateful processing and aggregation as a first-class feature. Stateful processing gives Samza developers a completely new paradigm for aggregating stream data. These features help organizations do high-performance stream processing at scale.
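
As a hedged sketch of what stateful processing looks like in practice, the task below keeps a running count per message key in a local key-value store; the store name "counts" is hypothetical and would have to be declared, together with its changelog stream, in the job configuration.

    import org.apache.samza.config.Config;
    import org.apache.samza.storage.kv.KeyValueStore;
    import org.apache.samza.system.IncomingMessageEnvelope;
    import org.apache.samza.task.InitableTask;
    import org.apache.samza.task.MessageCollector;
    import org.apache.samza.task.StreamTask;
    import org.apache.samza.task.TaskContext;
    import org.apache.samza.task.TaskCoordinator;

    // Sketch of stateful processing: a per-key counter kept in a local store
    // that Samza can replicate to a changelog stream for fault tolerance.
    public class CountingTask implements StreamTask, InitableTask {
      private KeyValueStore<String, Integer> counts;

      @Override
      @SuppressWarnings("unchecked")
      public void init(Config config, TaskContext context) {
        // "counts" is a hypothetical store name declared in the job configuration.
        counts = (KeyValueStore<String, Integer>) context.getStore("counts");
      }

      @Override
      public void process(IncomingMessageEnvelope envelope,
                          MessageCollector collector,
                          TaskCoordinator coordinator) {
        String key = (String) envelope.getKey(); // assumes string-keyed messages
        Integer current = counts.get(key);
        counts.put(key, current == null ? 1 : current + 1);
      }
    }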

Created to process tracking data, service log data, and for data ingestion pipelines for realtime services, Samza originated at LinkedIn, and was submitted to the Apache Incubator in July 2013. 

"LinkedIn is thrilled to see Apache Samza experience such strong adoption and now graduate to a Top-Level Project. Samza was developed to help solve some of LinkedIn's  toughest stream processing challenges and has become a central piece of our infrastructure," said Kevin Scott, Senior Vice President of Engineering and Operations at LinkedIn.

Apache Samza is used in an array of industries, applications, and organizations, including:
  • DoubleDutch, developers of mobile apps for events and conferences, uses Samza to power their analytics platform and stream data live into an event dashboard for real-time insights;
  • Fortscale's Big Data security analytics solutions use Samza to process security event logs as part of their data ingestion pipelines and online machine learning model creation process;
  • Happy Pancake, Northern Europe's largest internet dating service, uses Samza for all event handlers and data replication;
  • Advertising technology provider Improve Digital uses Samza as the foundation of a realtime processing capability performing data analytics and as the basis for an alerting system;
  • Jack Henry & Associates uses Samza to process user activity data across its Banno suite of products for financial institutions;
  • MobileAware uses Samza as a foundation for two mobile network products: real time analytics and multi channel notification (push, text message and HTML5);
  • Technology startup Project Florida uses Samza for real-time monitoring of data streams from wearable sensors, for preventative healthcare purposes;
  • Quantiply, providers of Cloud-based micro-applications, uses Samza to bring together user event, system performance, and business operational data for real-time visibility and decision support; and
  • Social media business intelligence solution VinTank uses Samza to power their analysis and natural language processing (NLP) pipeline.


"We've had great experiences with Samza at Improve Digital where it has enabled us to  build out our streaming data platform," said Garry Turkington, CTO of Improve Digital. "It's fantastic to see it graduate to a top-level project."

Jay Kreps, CEO of Confluent, said "Samza is a fantastic piece of infrastructure, and a great complement to Apache Kafka. We at Confluent are really excited to see it added as a top-level Apache project."

"Fortscale has been using Apache Samza successfully to build online machine learning algorithms and detect insider threats," said Dotan Patrich, Software Architect at Fortscale. "It's been a great experience building large scale streaming solution and using Samza's and enjoying it's unique state management architecture. It's fantastic to see it graduate to a Top-Level Project."

"I've been involved in Apache Samza's community since its inception. It's been thrilling to watch the community grow, and I'm very proud and excited to see that the project is graduating. Samza has a bright future, and I'm looking forward to what's to come," added Riccomini.

Availability and Oversight
As with all Apache products, Apache Samza software is released under the Apache License v2.0, and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For documentation and ways to become involved with Apache Samza, visit http://samza.apache.org/ and @SamzaStream on Twitter

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 500 individual Members and 4,500 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Budget Direct, Cerner, Citrix, Cloudera, Comcast, Facebook, Google, Hortonworks, HP, Huawei, IBM, InMotion Hosting, iSigma, Matt Mullenweg, Microsoft, Pivotal, Produban, WANdisco, and Yahoo. For more information, visit http://www.apache.org/ or follow https://twitter.com/TheASF.

© The Apache Software Foundation. "Apache", "Apache Samza", "Samza", "Apache Hadoop", "Hadoop", "Hadoop YARN", "Apache Kafka", "Kafka", "ApacheCon", and the Apache Samza logo are trademarks of The Apache Software Foundation. All other brands and trademarks are the property of their respective owners.

# # #
