Apache Infrastructure Team

Thursday Jan 27, 2011

Controlling your SpamAssassin threshold

Committers,

The Infrastructure Team has just enabled a new feature to control the SpamAssassin threshold for your apache.org account. The default score for user delivery has always been 10, but with this new feature you can lower that score to whatever you want. Many people with older accounts will probably prefer a lower score, such as 5, which is the default for all Apache mailing lists.

To lower your score, log in to id.apache.org and change your 'SpamAssassin Threshold (asf-sascore)' attribute to your desired level. Don't forget to supply the form with your LDAP password.
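For context, the threshold is the SpamAssassin score at or above which a message is treated as spam, so lowering it makes filtering more aggressive. A hedged illustration (the scores below are made up, not taken from our mail setup) of how the same borderline message is marked under the two thresholds:

    # X-Spam-Status header added by SpamAssassin (values are illustrative)
    # With the default threshold of 10, a message scoring 7.3 is let through:
    X-Spam-Status: No, score=7.3 required=10.0
    # With asf-sascore lowered to 5, the same message is marked as spam:
    X-Spam-Status: Yes, score=7.3 required=5.0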

Enjoy.

Friday Jan 14, 2011

https://id.apache.org -- New Password Service

Folks,

The infrastructure team are pleased to announce the availability of id.apache.org, the new password management tool for all ASF committers and members. This new service will allow users to:

  1. Reset forgotten LDAP passwords themselves; there is no need to contact the Infra team anymore.
  2. Change their LDAP password.
  3. Update their LDAP record, i.e. change the forename, surname, or mail attributes [1].

Users should note that this service only allows you to manage your LDAP password, thus controlling access to those resources currently protected by LDAP authnz.

Once logged in you will note that some fields are not editable; this is by design, and they are there merely to show you your LDAP entry. You are currently only allowed to edit your Surname, Given name (Forename), and Mail attributes. This list may be extended as we make more features available, and any additions will be announced as and when they happen.

Users of this service should note that we have a few small bugs to iron out, and this will be done as soon as possible. For example, if you attempt to modify your details and do not re-enter your password, you will currently see a generic HTTP 500 error.

Thanks must go to Ian Boston (ieb) and Daniel Shahaf (danielsh) for making this work. Ian provided the initial code (his first ever attempt at Python, too). Daniel then took it, implemented several changes, and generally improved the backend.

[1] - It should be noted that updating your mail record in LDAP will not currently have any effect on where your apache.org email is forwarded to. This is planned to take place later this year.

Friday Dec 17, 2010

LDAP and password policy

As of approximately 03:00 (UTC) today the infrastructure team have enabled a password policy for all LDAP accounts.
This policy has been implemented at the LDAP infrastructure level and will affect all users. It has been deployed using OpenLDAP's password policy schema and overlay.

At the time of launch we are enforcing the following policy (a configuration sketch follows the list).

  • On a given user's 10th successive login failure, the account will be locked.
  • The account will then be automatically unlocked 24 hours later, or earlier if a member of root@ unlocks it for you.
  • If the user successfully completes a login before the tally reaches 10, the counter for failed logins is reset to 0.
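For those curious about the mechanics, this maps onto a handful of attributes in OpenLDAP's standard ppolicy overlay. A minimal sketch of the default policy entry, assuming an illustrative DN; only the values stated above reflect our actual policy:

    # Default password policy entry consumed by the ppolicy overlay (DN is illustrative)
    dn: cn=default,ou=policies,dc=apache,dc=org
    objectClass: organizationalRole
    objectClass: pwdPolicy
    cn: default
    pwdAttribute: userPassword
    # Enable lockout and lock the account on the 10th consecutive failed bind
    pwdLockout: TRUE
    pwdMaxFailure: 10
    # Automatically unlock the account after 24 hours (value is in seconds)
    pwdLockoutDuration: 86400

A successful bind clears the accumulated failure count, which gives the reset-to-0 behaviour described above.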

We are enabling this to help prevent brute-force attempts at guessing passwords. It will also highlight potential issues with accounts.

As with all account-related queries, you should contact root@ - we will be able to unlock your account for you, allowing you to regain access.

Thursday Dec 02, 2010

The ASF CMS

Over the past 3 months, the Infrastructure Team has developed and deployed a custom CMS for Apache projects to use to manage their websites. There is a document available which explains the rationale, role, and future plans for the CMS. We have opened up the ACLs for the www.apache.org site so that all committers are now able to edit content on the site using the CMS (while still restricting live publication to the Apache membership and the Infrastructure Team).

The basic workflow for committers is easy to describe: first, install the javascript bookmarklet on your browser toolbar. Next, visit a page on the www.apache.org website. When you've located a page you'd like to edit, click the installed bookmarklet: you'll be taken to a working copy of the markdown source for the page in question. To edit the content, click the [Edit] link. A markdown editor will show you a preview of your changes while you work. When you have finished, submit your changes and [Commit] them.

Your commit will trigger buildbot to build a staging version of your changes. You can follow the build while it is ongoing, and once it has completed you can click on the [Staged] link to see the results. Members and Infrastructure Team members can continue on and publish those changes once they are satisfied with them. Other committers may need to send a note to the site-dev@ mailing list to request publication of their changes.

The publication links in the CMS are essentially merge + commit operations in subversion which are tied into the live site via svnpubsub. That means publishing in the CMS is virtually instantaneous.

The CMS is now open to all top-level and incubating projects. Interested projects should contact the infrastructure@ mailing list or simply file an INFRA ticket against the CMS component. Early adopters are encouraged to collaborate on the wiki page for working out usage and adoption issues.

Tuesday Oct 26, 2010

ReviewBoard instance running at the ASF

We know that some projects use ReviewBoard externally to the ASF, some use codereview.appspot.com, and some use Fisheye/Clover externally. By popular request, we now have an internal ReviewBoard instance running at https://reviews.apache.org! So sign up for an account, request that your project's repository be added (file an INFRA issue), and get collaborating. If you have questions or comments, please raise them on the infrastructure-dev list; reviews.apache.org is in its early stages and may need tweaking.

Thursday Sep 23, 2010

1 million commits and still going strong.

Yesterday, the main ASF SVN code repository passed the 1 million commit mark. Shortly thereafter one of the ASF members enquired as to how he could best grab the svn log entries for all of these commits. As always there were a bunch of useful replies, but they were all set to take quite some time, mainly because simply running

svn log http://svn.apache.org/repos/asf -r1:1000000 

will not only take several hours, it will also cause high levels of load on one of the two geo-balanced SVN servers. Requesting that many log entries will also likely result in your IP address being banned.

So I decided to create the log set locally on one of the SVN servers. This is now available for download [http://s.apache.org/1m-svnlog] [md5].
This is a 50MB tar/gz file that uncompresses to about 240MB. The log 'only' contains the log entries from r1 -> r1,000,000 - if you want the rest you can run:

svn log http://svn.apache.org/repos/asf -r1000001:HEAD

This will give you all the log entries from r1,000,001 up to the current revision.
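If you just want the pre-generated archive, fetching and unpacking it is a one-liner each; a sketch (the local filename is illustrative, since the short link decides what you actually download):

    # Download the pre-generated log archive and unpack it (filename is illustrative)
    wget http://s.apache.org/1m-svnlog -O asf-svn-log-r1-1000000.tar.gz
    tar -xzf asf-svn-log-r1-1000000.tar.gz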

Monday Jul 19, 2010

new hardware for apache.org

This weekend we rolled out a new server, a Dell PowerEdge R410 named Eos, to host the Apache.org websites and MoinMoin wiki:

  • OS: FreeBSD 8.1-RC2
  • CPU: 2x Intel(R) Xeon(R) X5550 @ 2.67GHz (2 packages x 4 cores x 2 SMT threads = 16 CPUs)
  • RAM: 48GB DDR3
  • Storage: 12x 15k RPM 300GB SAS, 2x 80GB SSD, configured as a ZFS raidz2 with the SSDs used for the L2ARC

This new hardware replaces an older Sun T2000, also called eos, as the primary webserver for apache.org. We hope everyone enjoys the increased performance, especially from the Wiki!

On the less visible infrastructure side, we are also upgrading Athena, one of our frontend mail servers. The new Athena is a Dell PowerEdge R210 with a 4-core 2.67GHz processor, replacing a Sun X2200.

Friday Jun 11, 2010

s.apache.org - uri shortening service

Today we've brought s.apache.org online. It's a URL shortening service that's limited to Apache committers - the people who write all that Apache software! One of the main reasons we're providing this service is to allow committers to use shortened links whose provenance is known to be a trusted source, which is a big improvement over the generic shorteners out there in the wild. It is also meant to provide permanent links suitable for inclusion in board reports, or more generally in email sent to our mailing lists - which will be archived, either publicly or privately, for as long as Apache is around.

The service is easy to use, and being from Apache, the source code for the service is readily available. The primary author of the code is Ulrich Stärk (uli). One of the more interesting features you can pick up from the source is the ability to "display" a link before doing a redirect by tacking "?action=display" onto any shortened URL. For the truly paranoid there is the "?action=display;cookie=1" query string to force all shortened URLs to display by default before redirecting. That feature may be turned off again with the "?action=display;cookie=" query string. Again, look over the source code for other interesting features you may wish to take advantage of.
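As a quick illustration (the short code below is made up), you can preview a link from the command line before following it:

    # Show the display page for a shortened link instead of being redirected
    curl -s 'https://s.apache.org/abc123?action=display'

    # Or just inspect the redirect target from the response headers
    curl -sI 'https://s.apache.org/abc123' | grep -i '^Location:'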

Committers: here's some javascript you might consider placing in a bookmark, courtesy of Doug Cutting. To use it, create a new bookmark and set the link URL to

javascript:void(location.href='https://s.apache.org/?action=create&search=ON&uri='+escape(location.href))

Tuesday Apr 13, 2010

apache.org incident report for 04/09/2010

Apache.org services recently suffered a direct, targeted attack against our infrastructure, specifically the server hosting our issue-tracking software.

The Apache Software Foundation uses a donated instance of Atlassian JIRA as an issue tracker for our projects. Among other projects, the ASF Infrastructure Team uses it to track issues and requests. Our JIRA instance was hosted on brutus.apache.org, a machine running Ubuntu Linux 8.04 LTS.

Password Security

If you are a user of the Apache hosted JIRA, Bugzilla, or Confluence, a hashed copy of your password has been compromised.

JIRA and Confluence both use a SHA-512 hash, but without a random salt. We believe the risk to simple passwords based on dictionary words is quite high, and most users should rotate their passwords.

Bugzilla uses SHA-256 with a random salt. The risk for most users is low to moderate, since pre-built password dictionaries are not effective, but we still recommend that users retire these passwords.

In addition, if you logged into the Apache JIRA instance between April 6th and April 9th, you should consider that password compromised, because the attackers changed the login form to log passwords.
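The salt makes the difference: with an unsalted hash, everyone using a given password stores the same digest, so one precomputed dictionary cracks them all, whereas a per-user salt forces attackers to brute-force each account separately. A minimal sketch of the idea (illustrative only, not the exact hashing schemes these applications use):

    # Unsalted: the digest depends only on the password, so it can be precomputed
    echo -n 'password123' | sha512sum

    # Salted: a random per-user salt makes every stored digest unique
    salt=$(openssl rand -hex 8)
    echo -n "${salt}password123" | sha256sum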

What Happened?

On April 5th, the attackers, via a compromised Slicehost server, opened a new issue, INFRA-2591. This issue contained the following text:

ive got this error while browsing some projects in jira http://tinyurl.com/XXXXXXXXX [obscured]

Tinyurl is a URL redirection and shortening tool. This specific URL redirected back to the Apache instance of JIRA, at a special URL containing a cross-site scripting (XSS) attack. The attack was crafted to steal the session cookie from the user logged in to JIRA. When this issue was opened against the Infrastructure team, several of our administrators clicked on the link. This compromised their sessions, including their JIRA administrator rights.

At the same time as the XSS attack, the attackers started a brute force attack against the JIRA login.jsp, attempting hundreds of thousands of password combinations.

On April 6th, one of these methods was successful. Having gained administrator privileges on a JIRA account, the attackers used this account to disable notifications for a project, and to change the path used to upload attachments. The path they chose was configured to run JSP files, and was writable by the JIRA user. They then created several new issues and uploaded attachments to them. One of these attachments was a JSP file that was used to browse and copy the filesystem. The attackers used this access to create copies of many users' home directories and various files. They also uploaded other JSP files that gave them backdoor access to the system using the account that JIRA runs under.

By the morning of April 9th, the attackers had installed a JAR file that would collect all passwords on login and save them. They then sent password reset mails from JIRA to members of the Apache Infrastructure team. These team members, thinking that JIRA had encountered an innocent bug, logged in using the temporary password sent in the mail, then changed the passwords on their accounts back to their usual passwords.

One of these passwords happened to be the same as the password to a local user account on brutus.apache.org, and this local user account had full sudo access. The attackers were thereby able to login to brutus.apache.org, and gain full root access to the machine. This machine hosted the Apache installs of JIRA, Confluence, and Bugzilla.

Once they had root on brutus.apache.org, the attackers found that several users had cached Subversion authentication credentials, and used these passwords to log in to minotaur.apache.org (aka people.apache.org), our main shell server. On minotaur, they were unable to escalate privileges with the compromised accounts.

About 6 hours after they started resetting passwords, we noticed the attackers and began shutting down services. We notified Atlassian of the previously unreported XSS attack in JIRA and contacted SliceHost. Atlassian was responsive. Unfortunately, SliceHost did nothing and 2 days later, the very same virtual host (slice) attacked Atlassian directly.

We started moving services to a different machine, thor.apache.org. The attackers had root access on brutus.apache.org for several hours, and we could no longer trust the operating system on the original machine.

By April 10th, JIRA and Bugzilla were back online.

On April 13th, Atlassian provided a patch for JIRA to prevent the XSS attack. See JRA-20994 and JRA-20995 for details.

Our Confluence wiki remains offline at this time. We are working to restore it.

What worked?

  • Limited use passwords, especially one-time passwords, were a real lifesaver. If JIRA passwords had been shared with other services/hosts, the attackers could have caused widespread damage to the ASF's infrastructure. Fortunately, in this case, the damage was limited to rooting a single host.
  • Service isolation worked with mixed results. The attackers must be presumed to have copies of our Confluence and Bugzilla databases, as well as our JIRA database, at this point. These databases include hashes of all passwords used on those systems. However, other services and hosts, including LDAP, were largely unaffected.

What didn't work?

  • The primary problem with our JIRA install is that the JIRA daemon runs as the user who installed JIRA. In this case, it runs as a jira role-account. There are historical reasons for this decision, but with 20/20 hindsight, and in light of the security issues at stake, we expect to revisit the decision!
  • The same password should not have been used for a JIRA account as was used for sudo access on the host machine.
  • Inconsistent application of one-time passwords: we required them on other machines, but not on brutus. PAM was configured to allow optional use of OPIE, but not all of our sudoers had switched to it.
  • SSH passwords should not have been enabled for login over the Internet. Although the Infrastructure Team had attempted to configure the sshd daemon to disable password-based logins, having UsePAM yes set meant that password-based logins were still possible (see the sshd_config sketch after this list).
  • We use Fail2Ban for many services, but we did not have it configured to track JIRA login failures.
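The sshd pitfall above is easy to hit: with UsePAM yes, sshd can still accept passwords through PAM's keyboard-interactive mechanism even when PasswordAuthentication is off, so the settings have to be considered together. A minimal sketch, not our exact configuration:

    # /etc/ssh/sshd_config (sketch; exact behaviour depends on the sshd version and PAM stack)
    PasswordAuthentication no
    # Keyboard-interactive prompts are how PAM can accept passwords despite the line above
    ChallengeResponseAuthentication no
    # PAM still handles account and session management
    UsePAM yes
    PubkeyAuthentication yes

If one-time-password schemes such as OPIE are delivered through PAM, keyboard-interactive has to stay enabled instead, with the PAM stack restricted so it never falls back to static passwords.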

What are we changing?

  • We have remedied the JIRA installation issues with our reinstall. JIRA is now installed by root and runs as a separate daemon with limited privileges.
  • For the time being we are running JIRA in a httpd-tomcat proxy config with the following rules:
    
       ProxyPass /jira/secure/popups/colorpicker.jsp !
       ProxyPass /jira/secure/popups/grouppicker.jsp !
       ProxyPass /jira/secure/popups/userpicker.jsp !
       ProxyPass /jira        http://127.0.0.1:18080/jira
    
    
    Sysadmins may find this useful to secure their JIRA installation until an upgrade is feasible.
  • We will be making one-time-passwords mandatory for all super-users, on all of our Linux and FreeBSD hosts.
  • We have disabled caching of svn passwords, and removed all currently cached svn passwords across all hosts at the ASF via the global /etc/subversion/config file:
    
    [auth]
    store-passwords = no
    
    
  • Use Fail2Ban to protect web application logins from brute-force attacks (see the jail sketch below).
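To give a flavour of what that looks like, here is a minimal jail sketch, assuming a hypothetical filter that matches repeated failed login.jsp requests in the front-end httpd log (the names and paths are illustrative, not our actual configuration):

    # /etc/fail2ban/jail.local (illustrative only)
    [jira-login]
    enabled  = true
    port     = http,https
    # hypothetical filter matching repeated failed login.jsp requests
    filter   = jira-login
    logpath  = /var/log/httpd/jira-access.log
    maxretry = 10
    findtime = 600
    bantime  = 3600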

We hope our disclosure has been as open as possible and true to the ASF spirit. Hopefully others can learn from our mistakes.

Monday Mar 29, 2010

ASF Buildbot svn setup

Here at the ASF we have a Subversion setup with all our projects' code in one repository, with each project having its own style of trunk/branches/tags/site, etc. This works well for us, but it did present some initial problems when setting up our Buildbot instance to work with it. [Read More]

Thursday Mar 04, 2010

New slaves for ASF Buildbot

The ASF Buildbot CI instance has just launched two more slaves, expanding the range of platforms it can build and test on. [Read More]

Monday Feb 22, 2010

The ASF LDAP system

When we decided some time ago to start using LDAP for auth{n,z} we had to come up with a sane structure; this is what we have thus far:

 dc=apache,dc=org
      | ---  ou=people,dc=apache,dc=org
      | ---  ou=groups,dc=apache,dc=org
           | ---  ou=people,ou=groups,dc=apache,dc=org
           | ---  ou=committees,ou=groups,dc=apache,dc=org

There are also other OUs that contain infrastructure-related objects.

So with "dc=apache,dc=org" being our basedn, we decided we needed to keep the structure as simple as possible and placed the following objects in the respective OUs:

  • User accounts - "ou=people,dc=apache,dc=org"
  • POSIX groups - "ou=groups,dc=apache,dc=org"
  • User groups - "ou=people,ou=groups,dc=apache,dc=org"
  • PMC/Committee groups - "ou=committees,ou=groups,dc=apache,dc=org"

Access to the LDAP infrastructure is connection-limited to hosts within our co-location sites. This is essentially to help prevent unauthorised data leaving our network.
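As a hedged illustration of how the layout is queried (the host name, uid, and group cn below are placeholders, and anonymous searches may not be permitted from outside our network):

    # Look up a committer's account entry
    ldapsearch -x -H ldaps://ldap.example.apache.org \
        -b "ou=people,dc=apache,dc=org" "(uid=jdoe)"

    # Look up a PMC/committee group
    ldapsearch -x -H ldaps://ldap.example.apache.org \
        -b "ou=committees,ou=groups,dc=apache,dc=org" "(cn=httpd)"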

LDAP, groups and SVN - Coupled together

The infrastructure team have now completed the next stage of the planned LDAP migration.
We have migrated our old SVN authorisation file and POSIX groups into LDAP data. SVN access control is now managed using these groups.

This means that changing access to the Subversion repositories is now as simple as changing group membership. We use some custom Perl scripts that build the equivalent authorisation file, meaning that we don't need to use the nasty <location>-block hack to do this (see the sketch below). It also means that all changes, including adding new groups and extending access control, are made simple.
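For reference, the generated file is presumably the standard Subversion path-based authz format, so a group change in LDAP ends up as something like the following (group names, usernames, and paths here are made up):

    # Generated Subversion authz file (illustrative excerpt)
    [groups]
    httpd-committers = jdoe, asmith

    [/httpd]
    @httpd-committers = rw
    * = r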

ASF PMC chairs are now able to make changes to their POSIX and SVN groups whilst logged into people.apache.org, using a selection of scripts:

  • /usr/local/bin/list_unix_groups.pl
  • /usr/local/bin/list_committees.pl
  • /usr/local/bin/modify_unix_groups.pl
  • /usr/local/bin/modify_committees.pl

All of these scripts have a '--help' option to show you how to use them.

What's next? We are now working on adding a custom ASF LDAP schema that will allow us to record ASF-specific data such as ICLA files, date of membership, etc.
We will also be looking at adding support for 3rd party applications such as Hudson, and building an identity management portal where people can manage their own accounts.

Wednesday Feb 17, 2010

SVN performance enhancements

Tonight we enabled a pair of Intel X25-Ms to serve as L2ARC cache for the ZFS array which contains all of our svn repositories. Over the next few hours, as these SSDs start serving files from cache, the responsiveness and overall performance of svn on eris (our master US-based server) should be noticeably better than it has been lately.

In addition we are planning to install 16GB of extra RAM into eris to improve ZFS performance even further, but for now we are hopeful that committers will appreciate the performance we've added tonight.
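For the curious, attaching SSDs as an L2ARC is a single zpool operation; a sketch with made-up pool and device names (not our actual layout):

    # Add two SSDs as L2ARC (read cache) devices to an existing pool
    zpool add svnpool cache ada4 ada5

    # Watch the cache devices warm up as reads start being served from them
    zpool iostat -v svnpool 10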


 

Monday Nov 09, 2009

What can the ASF Buildbot do for your project?

The below information has just been published to the main ASF Buildbot URI, ci.apache.org/buildbot.html.

A summary of just some of the things the ASF Buildbot can do for your project:

  • Perform per commit build & test runs for your project
  • Not just svn! - Buildbot can pull in from your Git/Mercurial branches too!
  • Build and Deploy your website to a staging area for review
  • Build and Deploy your website to mino (people) for syncing live
  • Automatically Build and Deploy Snapshots to Nexus staging area.
  • Create Nightly and historical zipped/tarred snapshot builds for download
  • Builds can be triggered manually from within your own Freenode IRC channel
  • An IRCBot can report on success/failures of a build instantly
  • Build Success/Failures can go to your dev/notification mailing list
  • Perform multiple builds of an svn/git commit on multiple platforms asynchronously
  • ASF Buildbot uses the latest RAT build to check for license header issues for all your files.
  • RAT Reports are published live instantly to ci.apache.org/$project/rat-report.[txt|html]
  • As indicated above, plain text or html versions of RAT reports are published.
  • [Coming Soon] - RAT Reports sent to your dev list, only new failures will be listed.
  • [Coming Soon] - Email a patch with inserted ASL 2.0 Headers into your failed files!!
  • Currently Buildbot has Ubuntu 8.04, 9.04 and Windows Server 2008 Slaves
  • [Coming Soon] - ASF Buildbot will soon have Solaris, FreeBSD 8 and Windows 7 Slaves

Don't see a feature that you need? Join the builds.at.apache.org mailing list and request it now, or file a Jira ticket.

Help is always on hand on the builds.at.apache.org mailing list for any problems or build configuration issues/requests. Or try the #asftest channel on irc.freenode.net for live support.

So now you want your project to use Buildbot? No problem, the best way is to file a Jira ticket and count to 10 (well, maybe a bit longer, but it won't be long before you are up and running).
