We are very pleased to announce that the 2022 European HTCondor Workshop will be held from Tuesday 11th October to Friday 14th October. Save the dates!
Thank you to all in-person and virtual participants in HTCondor Week 2022. Over the course of the event we had more than 40 talks spanning tutorials, applications, and science domains using HTCSS.
We hope to see you next year!
Starting with HTCondor 9.2.0, HTCondor will adopt a new version scheme to facilitate quicker patch releases. The version number will retain the MAJOR.MINOR.PATCH form, but with slightly different meanings for the components.
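As an aside on why the MAJOR.MINOR.PATCH form matters in practice: version strings in this form should be compared numerically, component by component, not as plain strings. A minimal illustrative sketch in Python (the helper name is ours, not part of HTCondor):

```python
# Minimal sketch: parse a MAJOR.MINOR.PATCH version string into a
# tuple of integers so versions compare numerically rather than
# lexically. The function name is illustrative, not part of HTCondor.
def parse_version(version):
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

# Tuple comparison orders versions correctly, e.g. 9.10.0 > 9.2.0,
# whereas a plain string comparison would get this wrong:
assert parse_version("9.10.0") > parse_version("9.2.0")
assert "9.10.0" < "9.2.0"  # lexical comparison is misleading
```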
The European HTCondor Workshop 2021 will be held next week, Monday Sept 20 through Friday Sept 24, as a purely online event via videoconference. It is not too late to register, so if you are interested, please hurry! Some slots for contributed presentations are still available. Please let us know about your projects, experience, plans, and issues with HTCondor. It’s a way to give back to the community in return for what you have received from it.
The European HTCondor Workshop 2021 will be held Monday Sept 20 through Friday Sept 24 as a purely online event via videoconference. The workshop is an opportunity for novice and experienced users of HTCondor to learn, get help and have exchanges between them and with the HTCondor developers and experts. This event is organized by the European community (and optimized for European timezones) but everyone worldwide is welcomed to attend and participate.
We are working to retire the use of the Grid Community Toolkit libraries (or GCT, which is a fork of the former Globus Toolkit) in the HTCondor Software Suite. Functionality provided by the GCT, such as GSI authentication, is rapidly being replaced by the use of IDTOKENS, SciTokens, and plain SSL within the ecosystem. Especially if your organization relies on GSI, please see this article on the htcondor-wiki that details our timeline and milestones for replacing GCT in HTCondor and the HTCondor-CE.
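As a rough illustration of where this migration leads, the sketch below shows what a GSI-free authentication setup can look like in an HTCondor configuration file. This is a hypothetical example only; the right method list for any given site will differ, so consult the wiki article above before changing anything:

```
# Hypothetical sketch of a GSI-free security configuration.
# Token-based methods are tried first, with SSL as a fallback.
# Exact values depend on your site; see the migration article.
SEC_DEFAULT_AUTHENTICATION_METHODS = IDTOKENS, SCITOKENS, SSL
```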
HTCondor Week 2021 is just over three weeks away! We are pleased to announce a preliminary schedule: https://agenda.hep.wisc.edu/event/1579/timetable/#20210524.detailed This will likely have some adjustments before the actual event, but it should still give people an overall sense of what to expect. Also, a reminder that registration is free but mandatory. Only registered users will be sent the meeting URL and password. You can sign up from our website: https://agenda.hep.wisc.edu/event/1579/registrations/239/
On March 26 2021 at 3:30pm the HTCondor v8.9.12 release was pulled down from our website after significant backwards-compatibility problems were discovered. We plan to fix these problems in v8.9.13 to be released by March 30. We apologize for any inconvenience.
When Greg Daues at the National Center for Supercomputing Applications (NCSA) needed to transfer 460 terabytes of NCSA files from the National Institute of Nuclear and Particle Physics (IN2P3) in Lyon, France, to Urbana, Illinois, for a project with FNAL, CC-IN2P3, and the Rubin Data Production team, he turned to the HTCondor high-throughput system. He used it not to run computationally intensive jobs, as many do, but to manage the hundreds of thousands of I/O-bound transfers.
Save the date and register now for another Campus Workshop on distributed high-throughput computing (dHTC), February 8-9, 2021, offered by the Partnership to Advance Throughput Computing (PATh). All campus cyberinfrastructure (CI) staff are invited to attend!
Feb 8: Training on Using and Facilitating the Use of dHTC and the Open Science Grid (OSG), 2-5pm ET. Seats are limited; register ASAP!
Feb 9: dHTC Virtual Office Hours, with breakout rooms for discussion on all things dHTC, including OSG services for campuses and CC* awards, 2-5pm ET. Unlimited seats.
While there is no fee for either day, registration is required for participants to receive virtual meeting room details and instructions for training accounts via email. Please feel free to send any questions about the event to email@example.com.
Recognizing the University of Wisconsin-Madison’s leadership role in research computing, the National Science Foundation announced this month that the Madison campus will be home to a five-year, $22.5 million initiative to advance high-throughput computing. The Partnership to Advance Throughput Computing (PATh) is driven by the growing need for throughput computing across the entire spectrum of research institutions and disciplines. The partnership will advance computing technologies and extend adoption of these technologies by researchers and educators. Read the full article: https://morgridge.org/story/national-science-foundation-establishes-a-partnership-to-advance-throughput-computing/
The IEEE Computer Society Technical Committee on Distributed Processing (TCDP) has named Professor Miron Livny from University of Wisconsin as the recipient of the 2020 Outstanding Technical Achievement Award. See https://tc.computer.org/tcdp/awardrecipients/.
The initial HTCondor research paper from 1988, “Condor – A Hunter of Idle Workstations”, has won the 2020 IEEE TCDP ICDCS High Impact Paper Award. “To celebrate the 40th anniversary of the International Conference on Distributed Computing Systems (ICDCS), the IEEE Technical Committee on Distributed Processing (TCDP) Award Committee has selected ICDCS High Impact Papers that have profoundly influenced the field of distributed computing systems.” See https://tc.computer.org/tcdp/awardrecipients/
On July 21, the CHTC will be hosting virtual office hours for the HTCondor-CE from 8-10am CDT. The HTCondor-CE software is a Compute Entrypoint (CE) based on HTCondor for sites that are part of a larger computing grid (http://htcondor-ce.org). During these office hours, we will be hosting a virtual room for general discussion with additional breakout rooms where you can meet with CHTC team members to discuss specific topics and questions. No appointment needed, just drop in! Office hour coordinates will be made available on the day of the event on this page: https://research.cs.wisc.edu/htcondor/office-hours/2020-07-21-HTCondorCE
The HTCondor team is re-releasing the HTCondor 8.9.5 deb packages for Debian and Ubuntu. A packaging error caused the Python bindings to be placed in the wrong directory, rendering them nonfunctional; the re-released packages correct this.
A recently posted article at Amazon explains how the University of Manchester dynamically grows the size of their HTCondor pool with resources from AWS by using the HTCondor Annex tool.
The article “A cost effective ExaFLOP hour in the Clouds for IceCube” discusses how an existing HTCondor pool was augmented with resources dynamically acquired from AWS, Azure, and Google, alongside Open Science Grid and on-prem resources, to perform science for the IceCube project. The objective, on top of (obviously) the science output, was to demonstrate how much compute capacity one can integrate during a regular working day, using only the two most cost-effective SKUs from each cloud provider.
We invite HTCondor users, administrators, and developers to HTCondor Week 2020, our annual HTCondor user conference, May 19-20, 2020. This will be a free, online event.
Read about how at SC19 UCSD got over 50,000 GPUs from the three largest cloud providers and integrated them into a single HTCondor pool to run production science jobs. At peak, this cloud-based dynamic HTCondor cluster provided more than 90% of the performance of the #1 TOP500 HPC supercomputer.
The CREAM working group has recently announced official support for the CREAM-CE will cease in December 2020. We are soliciting feedback on the HTCondor and Open Science Grid (OSG) transition plan. Please see this post to the htcondor-users email list for more information, and please email any concerns to firstname.lastname@example.org and/or email@example.com.
The 2018 European HTCondor Workshop will be held Tuesday Sept 4 through Friday Sept 7, 2018, at the Rutherford Appleton Laboratory in Oxfordshire, UK. This is a chance to find out more about HTCondor from the developers, and also to provide feedback and learn from other HTCondor users. Participation is open to all organizations and persons interested in HTCondor. <p>The registration deadline is Tuesday 21 Aug – register here (this web site also has much more information about the conference). You will receive a discount if you register by July 31. <p>If you’re planning to attend, please consider speaking – we’d like to hear about your project and how you are using HTCondor. Abstracts are accepted via the conference page linked above; follow the “Call for Abstracts” link and select “Submit new abstract”.
This article describes how HTCondor delivered more than 9 million CPU hours to help understand the genetics of human disease.
We invite HTCondor users, administrators, and developers to HTCondor Week 2018, our annual HTCondor user conference, in beautiful Madison, Wisconsin, May 21-24, 2018. HTCondor Week features tutorials and talks from HTCondor developers, administrators, and users. It also provides an opportunity for one-on-one or small group collaborations throughout the week. In addition, the HEPiX Spring 2018 Workshop will also take place in Madison the preceding week.
HPCWire recently published an article on how HTCondor enables NCSA to take raw data from the Dark Energy Camera telescope and process and disseminate the results within hours of observations occurring.
The Laser Interferometer Gravitational-Wave Observatory (LIGO) unlocked the final door to Albert Einstein’s Theory of Relativity, winning the Nobel Prize this week for detecting gravitational waves, ripples through space and time. Since 2004, HTCondor has been a core part of the data analysis effort. See this recent article, as well as this one, for more details.
Researchers and software engineers at the Intel-Broad Center for Genomic Data Engineering build, optimize, and widely share new tools and infrastructure that will help scientists integrate and process genomic data. The project is optimizing best practices in hardware and software for genome analytics to make it possible to combine and use research data sets that reside on private, public, and hybrid clouds, and it has recently identified HTCondor on its web site as an open source framework well suited for genomics analytics.
The HTCondor team has released updated RPMs for HTCondor versions 8.4.12, 8.6.5, and 8.7.2 running on Enterprise Linux 7. In the recent Red Hat 7.4 release, the updated SELinux targeted policy package prevented HTCondor’s SELinux policy module from loading. Red Hat Enterprise Linux 7.4 systems running with SELinux enabled will require this update for HTCondor to function properly.
The 2017 European HTCondor workshop will be held at DESY in Hamburg from Tuesday 6 June through Friday 9 June. This is a chance to find out more about HTCondor from the developers, and also to provide feedback and learn from other HTCondor users. Participation is open to all organizations and persons interested in HTCondor. <p>The registration deadline is Tuesday 30 May – register here (this web site also has much more information about the conference). <p>If you’re planning to attend, please consider speaking – we’d like to hear about your project and how you are using HTCondor. Abstracts will be accepted through Friday 26 May via the conference page linked above; follow the “Call for Abstracts” link and select “Submit new abstract”.
An updated version of the full HTCondor Week schedule is now available. You can access the schedules for each day from the HTCondor Week page. Hopefully this will be quite close to the final schedule. <p>Note that there will be a reception sponsored by Cycle Computing on Wednesday from 6-7pm. You can see the details at the Wednesday schedule page. <p>Please remember that the registration deadline for HTCondor Week is Tuesday, April 25 (register here).
Monday, April 17 is the last day to register for HTCondor Week. If you’re planning to attend but haven’t yet registered, please do so as soon as possible at the registration page. A few other notes: <ul> <li>We’ve added a preliminary list of talks and tutorials to the overview page and started on a preliminary schedule for Tuesday. <li>The hotel room block at the DoubleTree expires this Friday, April 7 (see the local arrangements page for details). Note that the room block at the Fluno Center has now expired. <li>We are still looking for more speakers. If you are interested in presenting, please email us at firstname.lastname@example.org (see the speaker information page). </ul>
Registration is now open for both HTCondor Week 2017 (held in Madison, Wisconsin, USA) and the 2017 European HTCondor workshop (held in Hamburg, Germany). The registration deadline for HTCondor Week is April 17; the registration deadline for the European HTCondor workshop is May 30. If you’re planning to attend, please register as soon as possible.
The HTCondor Week 2017 web page is now available. This web page includes information about nearby hotel options (note that HTCondor Week 2017 will be held at a different location than the last few HTCondor Weeks, so that may affect your hotel choice). Registration should be open by the end of February; at this time we anticipate a registration fee of $85/day.
At SC16, the HTCondor Team, Google, and Fermilab demonstrated a 160k-core cloud-based elastic compute cluster. This cluster uses resources from the Google Cloud Platform provisioned and managed by HTCondor as part of Fermilab’s HEPCloud facility. The following article gives more information on this compute cluster, and discusses how the bursty nature of computational demands is making the use of cloud resources increasingly important for scientific computing. Find out more information about the Google Cloud Platform here.
The 2017 European HTCondor workshop will be held Tuesday, June 6 through Friday, June 9, 2017 at DESY in Hamburg, Germany. We will provide more details as they become available.
This article explains how the Clemson Center for Geospatial Technologies (CCGT) was able to use HTCondor to help a student analyze large amounts of GIS (Geographic Information System) data. The article contains a good explanation of how the data was divided up in such a way as to allow it to be processed using an HTCondor pool. Using HTCondor allowed the data to be analyzed in approximately 3 hours, as opposed to the 4.11 days it would have taken on a single computer.
HTCondor Week 2017 will be held Tuesday, May 2 through Friday, May 5, 2017 at the Fluno Center on the University of Wisconsin-Madison campus. We will provide more details as they become available.
Just a reminder that the Tuesday tutorials at HTCondor Week are free to UW-Madison faculty, staff and students. If you have any interest in using HTCondor and Center for High Throughput Computing resources, we’d love to see you on Tuesday. <p>However, as everyone knows, there ain’t no such thing as a free lunch – people who attend the tutorials without paying are not eligible for the lunch (you can still get snacks at the breaks, though).
The registration deadline for HTCondor Week 2016 has been extended to Monday, May 9. This will be the final extension – we need to finalize attendance numbers for the caterers. <p>Also note that Wednesday, April 27, is the last day to be guaranteed to get the conference rate at the DoubleTree Hotel.
This Amazon Web services blog post explains how scientists at Fermilab (a Tier 1 data center for the CMS experiment at the LHC) use HTCondor and Amazon Web Services to elastically adapt their computational capacity to changing requirements. Fermilab added 58,000 cores with HTCondor and AWS, allowing them to simulate 500 million events in 10 days using 2.9 million jobs. Adding cores dynamically improves cost efficiency by provisioning resources only when they are needed.
We want to invite you to HTCondor Week 2016, our annual HTCondor user conference, in beautiful Madison, Wisconsin, May 17-20, 2016. We will again host HTCondor Week at the Wisconsin Institutes for Discovery, a state-of-the-art facility for academic and private research specifically designed to foster private and public collaboration. It provides HTCondor Week attendees with a compelling environment in which to attend tutorials and talks from HTCondor developers and users like you. It also provides many comfortable spaces for one-on-one or small group collaborations throughout the week.
Our current development series, 8.5, is well underway toward our upcoming production release. When you attend, you will learn how to take advantage of the latest features. You'll also get a peek into our longer term development plans--something you can only get at HTCondor Week!
We will have a variety of in-depth tutorials and talks where you can not only learn more about HTCondor, but you can also learn how other people are using and deploying HTCondor. Best of all, you can establish contacts and learn best practices from people in industry, government, and academia who are using HTCondor to solve hard problems, many of which may be similar to those you are facing.
Speaking of learning from the community, we'd love to have you give a talk at HTCondor Week. Talks are 15-20 minutes long and are a great way to share your ideas and get feedback from the community. If you have a compelling use of HTCondor you'd like to share, see our speaker's page.
You can register, get hotel details, and see the agenda overview on the HTCondor Week 2016 site.
This New Universe Daily news article discusses the collaboration between the HTCondor team at UW-Madison and the LIGO team at UW-Milwaukee, and how the HTCondor software was critical to the discovery of gravitational waves and will continue to be vital as LIGO moves forward. “For 20 years, LIGO was trying to find a needle in a haystack. Now we’re going to build a needle detection factory,” said Peter Couvares, a Senior Scientist with the LIGO project.
This Morgridge Institute news article explains the rich back-story of HTCondor’s role behind the recent announcement that scientists from the Laser Interferometer Gravitational-Wave Observatory (LIGO) unlocked the final door to Einstein’s Theory of Relativity. More than 700 LIGO scientists have used HTCondor to run complex data analysis workflows, accumulating 50 million core-hours in the past six months alone.
This Open Science Grid news article discusses how the Baker Lab at the University of Washington has used HTCondor and the OSG to successfully simulate the cylindrical TIM-barrel (triosephosphate isomerase-barrel) protein fold, which has been a challenge for nearly 30 years. TIM-barrel protein folds occur widely in enzymes, meaning that understanding them is important for applications such as the development of new vaccines. The Baker Lab performed about 2.4 million core hours of computation on the OSG in 2015.
This Open Science Grid news article discusses how a pilot project at the RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory has used HTCondor-G to incorporate virtual machines from Amazon’s Elastic Compute Cloud (EC2) spot market into a scientific computation platform. The ATLAS experiment is moving towards using commercial clouds for computation as budget constraints make maintaining dedicated data centers more difficult.
This Open Science Grid news article discusses the role of HTCondor, Pegasus and the Open Science Grid in the recently-announced discovery of gravitational waves by the Laser Interferometer Gravitational-Wave Observatory (LIGO). LIGO used a single HTCondor-based system to run computations across LIGO Data Grid, OSG and Extreme Science and Engineering Discovery Environment (XSEDE)-based resources, and consumed 3,956,910 compute hours on OSG.
REGISTRATION IS NOW OPEN (until 22nd Feb.)!! The workshop fee is 80 euros (VAT included), which covers the lunches and all coffee breaks throughout the event. The list of recommended hotels, instructions for fee payment, and directions to the venue are available on the workshop homepage. <ul> <li>Where: Barcelona, Spain, at the ALBA Synchrotron Facility <li>When: Monday February 29 2016 through Friday March 4 2016. <li>Workshop homepage: https://indico.cern.ch/e/Spring2016HTCondorWorkshop </ul>
The HTCondor team is re-releasing the RPMs for 8.4.3 and 8.5.1. A recent change to correct problems with Standard Universe in the RPM packaging resulted in unoptimized binaries being packaged. The new RPMs have optimized binaries.
There will be a workshop for HTCondor and ARC CE users in Barcelona, Spain, from February 29 through March 4, 2016.
- Where: Barcelona, Spain, at the ALBA Synchrotron Facility
- When: Monday February 29 2016 through Friday March 4 2016.
- Workshop homepage: https://indico.cern.ch/e/Spring2016HTCondorWorkshop
Save the dates! The HTCondor team, the NorduGrid collaboration, and the Port d'Informació Científica (PIC) and ALBA Synchrotron present a workshop for the users and administrators of HTCondor, the HTCondor CE, and the ARC CE to learn and connect in Barcelona, Spain. This is an opportunity for novice and experienced system administrators and users to learn, get help, and have exchanges between themselves and with the developers and experts.
The workshop will offer:
- Introductory tutorials on using and administrating HTCondor, HTCondor CE and ARC CE
- Technical talks on usage and deployment from developers and your fellow users
- Talks and tutorials on recent features, configuration, and roadmap
- The opportunity to meet with HTCondor developers, ARC CE developers, and other experts for non-structured office hours consultancy
Speaking of learning from the community, we would like to hear from people interested in presenting at this workshop. If you have a use case or best practices involving HTCondor, HTCondor CE or ARC CE you'd like to share, please send us an email at hepix-condorworkshop2016-interest (at) cern (dot) ch .
Information on registration will be available soon at: https://indico.cern.ch/e/Spring2016HTCondorWorkshop
Timetable overview: Monday, Tuesday, and Wednesday presentations will be dedicated to HTCondor and HTCondor CE, while Thursday's presentations will be dedicated to ARC CE. HTCondor and ARC CE developers and experts will be available on Thursday and Friday morning for non-structured "office hours" consultancy and individual discussions with users.
We hope to see you in Barcelona!
This UW-Madison news article discusses how a new computational approach permits evaluation of hydrogen data in software, which may replace the time-consuming manual approach. Putting HTCondor into the mix scales well, given the vast quantities of data expected as the Square Kilometre Array radio telescope is realized.
We invite HTCondor users, administrators, and developers to HTCondor Week 2015, our annual HTCondor user conference, in beautiful Madison, Wisconsin, May 19-22, 2015. HTCondor Week features tutorials and talks from HTCondor developers, administrators, and users. It also provides an opportunity for one-on-one or small group collaborations throughout the week.
HTCondor Week 2015 is May 19–22, 2015 in Madison, Wisconsin. Join other users, administrators, and developers for the opportunity to exchange ideas and experiences, to learn about the latest research, to experience live demos, and to influence our short and long term research and development directions.
This UW Madison news article describes the paleobiology application of the DeepDive system, which used text-mining techniques to extract data from publications and build a database. The quality of the database contents equaled that achieved by scientists. HTCondor helped to provide the million hours of compute time needed to build the database from tens of thousands of publications.
The HTCondor team, together with Worldwide LHC Computing Grid Deployment Board, is offering a four-day workshop the 8th through the 11th of December at CERN in Geneva, Switzerland, for HTCondor pool administrators. Topics envisioned include pool configuration, management, theory of operation, sharing of best practices and experiences, and monitoring of individual HTCondor batch pools. Special subjects of emphasis may include: scheduling, pool policy selection, performance tuning, debugging failures, and previews of upcoming features of HTCondor. Time will be provided for question and answer sessions. Attendees should have some knowledge of Linux and network administration, but no HTCondor experience will be required. See the workshop’s web site to register and for more details.
The HTCondor team, together with Worldwide LHC Computing Grid Deployment Board, plans to offer a two-day workshop the 8th and 9th of December at CERN in Geneva, Switzerland. HTCondor experts will also be available at CERN on Dec 10th and 11th for less structured one-on-one interaction and problem-solving. This workshop will provide lectures and presentations aimed towards both new and experienced administrators of HTCondor pools setup to manage local compute clusters. See the workshop’s web site for more details.
This Novartis presentation at an Amazon Web Services Summit describes the problem and its solution, in which Cycle Computing and HTCondor enable the scheduling of 10,600 Amazon EC2 spot instances.
HTCondor Week attendees are interested in hearing about your efforts during our annual meeting, April 28-30. Please consider presenting. Details for adding your talk to the schedule are given in this page of Information for HTCondor Week Speakers.
HTCondor Week 2014, our annual HTCondor user conference, is scheduled for April 28-April 30, 2014. We will again host HTCondor Week at the Wisconsin Institutes for Discovery in beautiful Madison, Wisconsin.
In a change from previous years, technical talks will begin on Monday. See the web site for current details.
At HTCondor Week, you can look forward to:
- Technical talks on usage and deployment from developers and your fellow users
- Talks and tutorials on new HTCondor features
- Talks on future plans for HTCondor
- Introductory tutorials on using and administrating HTCondor
- The opportunity to meet with HTCondor developers and other users
Information on registration and scheduling will be available soon.
As Ian Cottam reports in his blog for the Software Sustainability Institute at the University of Manchester, HTCondor is easy to install, runs on many platforms, and contains features such as DAGMan to order the execution of sets of jobs, making the system even more useful.
International Science Grid This Week (ISGTW) reports on two research efforts in the area of brain research. The work of Ned Kalin is highlighted in this article on brain circuitry and mechanisms underlying anxiety. And, high throughput computing permits the research of Mike Koenigs in psychopathy to analyze data in days, when without HTC, it might take years.
Miron Livny, Director of the UW-Madison Center for High Throughput Computing and founder of the HTCondor workload management system, has been honored with the 2013 High Performance Parallel and Distributed Computing (HPDC) Achievement Award.
Bosco is a client for Linux and Mac operating systems for submitting jobs to remote batch systems without administrator assistance. It is designed for end-users, and only requires ssh access to one or more cluster front-ends. Target clusters can be HTCondor, LSF, PBS, SGE or SLURM managed resources. The new Bosco 1.2 release is much easier to install, will handle more jobs, will send clearer error messages, and makes it easier to specify the memory you need inside the clusters you connect to.
An article published at HPCwire highlights research out of Brigham Young University with a goal to demonstrate an alternative model to High Performance Computing (HPC) for water resource stakeholders by leveraging High Throughput Computing (HTC) with HTCondor.
The HPCwire: CERN, Google Drive Future of Global Science Initiatives article describes the computing environment of the ATLAS project at CERN. HTCondor and now Google Compute Engine aid the extensive collision analysis effort for ATLAS.
Contributor Miha Ahronovitz traces the history of high throughput computing (HTC), noting the particularly enthusiastic response from the high energy physics world and the role of HTC in such important discoveries as the Higgs boson. As one of the biggest generators of data, this community has been dealing with the “big data” deluge long before “big data” assumed its position as the buzzword du jour. Read more at HPC In the Cloud.
We want to invite you to HTCondor Week 2013, our annual HTCondor user conference, in beautiful Madison, WI, April 29-May 3, 2013. (HTCondor Week was formerly named Condor Week, matching a name change for the software.) We will again host HTCondor Week at the Wisconsin Institutes for Discovery, a state-of-the-art facility for academic and private research specifically designed to foster private and public collaboration. It provides HTCondor Week attendees a compelling environment to attend tutorials and talks from HTCondor developers and users like you. It also provides many comfortable spaces for one-on-one or small group collaborations throughout the week. This year we continue our partnership with the Paradyn Tools Project, making this year Paradyn/HTCondor Week 2013. There will be a full slate of tutorials and talks for both HTCondor and Paradyn.
Our current development series, 7.9, is well underway toward our upcoming production release. When you attend, you will learn how to take advantage of the latest features such as per-job PID namespaces, cgroup enforced resource limits, Python bindings, CPU affinity, BOSCO for submitting jobs to remote batch systems without administrator assistance, EC2 spot instance support, and a variety of speed and memory optimizations. You'll also get a peek into our longer term development plans--something you can only get at HTCondor Week!
We will have a variety of in-depth tutorials, talks, and panels where you can not only learn more about HTCondor, but you can also learn how other people are using and deploying HTCondor. Best of all, you can establish contacts and learn best practices from people in industry, government, and academia who are using HTCondor to solve hard problems, many of which may be similar to those facing you.
Speaking of learning from the community, we'd love to have you give a talk at HTCondor Week. Talks are 20 minutes long and are a great way to share your ideas and get feedback from the community. If you have a compelling use of HTCondor you'd like to share, let Alan De Smet know (email@example.com) and he'll help you out. More information on speaking at HTCondor Week is available at the HTCondor Week web site.
You can register, get the hotel details and see the agenda overview on the HTCondor Week 2013 site. See you soon in Madison!
In order to resolve a lawsuit challenging the University of Wisconsin-Madison’s use of the “Condor” trademark, the University has agreed to begin referring to its Condor software as “HTCondor” (pronounced “aitch-tee-condor”). The letters at the start of the new name (“HT”) derive from the software’s primary objective: to enable high throughput computing, often abbreviated as HTC. Starting in the end of October and through November of this year, you will begin to see this change reflected on our web site, documentation, web URLs, email lists, and wiki. While the name of the software is changing, nothing about the naming or usage of the command-line tools, APIs, environment variables, or source code will change. Portals, procedures, scripts, gateways, and other code built on top of the Condor software should not have to change at all when HTCondor is installed.
The Condor Team is pleased to announce the release of Condor 7.8.6, which contains an important security fix that was incorrectly documented as being in the 7.8.5 release. This release is otherwise identical to the 7.8.5 release. Affected users should upgrade as soon as possible. More details on the security issue can be found here. Condor binaries and source code are available from our Downloads page.
Regrettably, due to an error in building 7.6.8, it is not a valid release and has been pulled from the web. Please update to version 7.6.9 or 7.8.2 instead to address the security issue posted yesterday. More details on the security problem can be found here, and Condor binaries and source code are available from our Downloads page.
The Condor Team is pleased to announce the release of Condor 7.8.2 and Condor 7.6.8, which fix an important security issue. All users should upgrade as soon as possible. More details on the security problem can be found here, and Condor binaries and source code are available from our Downloads page.
This UW-Madison news release describes the collaboration and contribution of Condor to the computing efforts that support research across the globe.
This Department of Energy Office of Science and the National Science Foundation award to the OSG will extend the reach of OSG capabilities that support research with computing power and data storage, as detailed in this UW-Madison news release. Condor is a significant component in the distributed high-throughput OSG middleware stack.
Two papers by the Condor Team were selected to represent the most influential papers in the history of The International ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC). Details…
Red Hat Enterprise MRG support expands into the cloud as announced in this press release, by leveraging the EC2 job universe in Condor, and by maintaining supported images of Red Hat Enterprise Linux with Condor pre-installed on Amazon storage services. Red Hat MRG can schedule local grids, remote grids, virtual machines for internal clouds, and now, rented cloud infrastructure. There is also documentation of this feature.
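As a sketch of what this looks like on the Condor side, an EC2 job is described with the grid universe in a submit description file along the following lines; the AMI ID, key file paths, and service URL below are illustrative placeholders, not details from the press release.

```
# Sketch of an EC2 submission via Condor's grid universe.
# All identifiers below are illustrative placeholders.
universe      = grid
grid_resource = ec2 https://ec2.amazonaws.com/

# Files holding the AWS credentials:
ec2_access_key_id     = /home/user/ec2_access_key
ec2_secret_access_key = /home/user/ec2_secret_key

# Which image to boot, and on what instance type:
ec2_ami_id        = ami-00000000
ec2_instance_type = m1.small

# For EC2 jobs the executable serves only as a label for the job.
executable = ec2_vm_job
log        = ec2_job.log
queue
```

Condor then manages the instance's lifetime like any other job: condor_q shows its state and condor_rm terminates the instance.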
offering increased performance, reliability, and interoperability, as presented in this news release.
Center for High Throughput Computing (CHTC) becomes first Red Hat Center of Excellence Development Partner
Red Hat announced that it has expanded its technology partnership with the University of Wisconsin-Madison (UW-Madison) to establish the Center for High Throughput Computing (CHTC) as the first Red Hat Center of Excellence Development Partner. In addition, Red Hat has awarded the UW-Madison CHTC its Red Hat Cloud Leadership Award for its advancements in cloud computing based on the open source Condor project and Red Hat technologies. See the Red Hat press release for more information.
Cycle Computing used Condor to create a 10,000-core cluster on top of Amazon EC2. The virtual cluster was used for 8 hours to provide 80,000 hours of compute time for protein analysis work for Genentech. “The 10,000-core cluster that Cycle Computing set up and ran for eight hours on behalf of Genentech would have ranked at 114 on the Top 500 computing list from last November (the most current ranking), so it was not exactly a toy even if the cluster was ephemeral.”
Dr. Sorin Adam Matei and his team are using Condor and DAGMan on TeraGrid to study Wikipedia. They are studying how collaborative, network-driven projects organize and function. Condor allowed them to harness approximately 4,800 compute hours in one day, processing 4 terabytes of information.
The Red Hat news release details the release of Enterprise MRG Grid 1.3, its grid product based on Condor. The release includes new administrative and user tools, Windows execute node support, enhanced workflow management, improved scheduling capabilities, centralized configuration management, and new virtual machine and cloud spill-over features. MRG Grid is also now fully supported for customers in North America and extended coverage is provided to customers throughout Europe. With MRG Grid 1.3, customers will gain the ability to scale to tens of thousands of devices, manage their grid in a centralized fashion, be able to provision virtual machines for grid jobs, and connect their grid to private and public clouds.
The first Condor Day Japan workshop will be held on November 4, 2010 in Akihabara, Tokyo. This one-day workshop will give collaborators and users the chance to exchange ideas and experiences, learn about the latest research, experience live demos, and influence short- and long-term research and development directions. The submission deadline is October 18, 2010. Please send inquiries to firstname.lastname@example.org .
ZDNet’s GreenTech Pastures blog reports that Purdue’s Condor-based DiaGrid helps them maximize utilization for electricity consumed. DiaGrid manages 28,000 processors across three states.
This Joint Information Systems Committee (JISC) September 2009 article “Energy Efficiency for High-Powered Research Computing” shows the quantity of research computing that can be completed while still saving power.
Researchers at Singapore’s national defense R&D organization, DSO National Labs, run evolutionary algorithms on their Condor cluster to evaluate and adapt maritime force protection tactics. This paper describes computer-evolved strategies running on a Condor cluster, applied to the defense of commercial shipping in the face of piracy.
Purdue’s Condor-based DiaGrid was named a Top-100 IT project of 2009 by Network World. DiaGrid has 177 teraflops of capacity.
generated for both data analysis and visualization efforts with XD award winners: University of Tennessee (see the press release) and the Texas Advanced Computing Center (TACC) at the University of Texas at Austin (as well as this press release). These TeraGrid eXtremeDigital Resource (XD) awards from the NSF will enable a center for Remote Data Analysis and Visualization (RDAV) research.
The International Summer School on Grid Computing 2009, will be in July 2009 in Sophia Antipolis, Nice, France. It’s a two-week hands-on summer school course in grid technologies. You’ll get a chance to learn a lot of in-depth material not only about various grid technologies, but about the principles that underlie them and connect them together. The school will include talks from well-known grid experts in deploying, using, and developing grids. Hands-on laboratory exercises will give participants experience with widely used grid middleware. More Information
This NCSA news release highlights the work of Jim Kupsch, working for the SDSC and the UW-Madison Vulnerability Assessment project. The project performs independent security audits of grid computing codes such as Condor and MyProxy.
An article at The Capital Times briefly mentions Professor Livny and the Condor Project’s work with the Large Hadron Collider. On Professor Livny’s work, Associate Dean Terry Millar said, “The computational contributions from UW are pretty profound.”
The NSF funded Pegasus Workflow Management System, created by researchers at University of Southern California Information Sciences Institute (ISI) in collaboration with the Condor Project, has been chosen to support workflow-based analysis for the coordinating center that links genomic and epidemiologic studies. More information has been released by ISI.
European Condor Week 2008 will take place 21-24 October in Barcelona, Spain, and will be hosted by Universitat Autonoma de Barcelona. Check out the European Condor Week 2008 web page for more information. If you are interested in giving a talk and sharing your experiences please email EUCondorWeek@caos.uab.es.
Videos for five of the tutorials given at Condor Week 2008 are now available for download. The tutorials are: Using Condor: An Introduction, Administrating Condor, Condor Administrator’s How-to, Virtual Machines in Condor, and Building and Modifying Condor.
As published in the Feb 28th issue of Nature, a team led by a Purdue University researcher has achieved images of a virus in detail two times greater than had previously been achieved. This breakthrough was enabled through the use of Purdue’s Condor distributed computing grid, which comprises more than 7,000 computers.
Private sector leaders in Milwaukee and southeastern Wisconsin are trying to bridge the gap between universities and businesses through more effective use of computing and scientific resources, and their vehicle is the new Milwaukee Institute, a non-profit organization that is building a cyber infrastructure of shared, grid-based computing that leverages Condor Project technology.
Today Red Hat announced Red Hat Enterprise MRG, a distributed computing platform offering that utilizes Condor for workload management. “The University of Wisconsin is pleased to work with Red Hat around the Condor project,” said Terry Millar, Associate Dean at the UW-Madison Graduate School.
As described in the HPCwire press release, Purdue University is to become one of five High Performance Computing Operations (HPC-Ops) centers within the NSF-funded Teragrid project. Purdue has the largest academic Condor pool in the world, which provides computing cycles to Teragrid.
“When all the Clemson University students are tucked in for the night, hundreds of desktops across the campus are turned on to create a supercomputing grid that can rapidly process large amounts of data.” “Her request for a series of computations that would have taken 10 years on a regular desktop computer was completed in just a few days.”
GreenvilleOnline.com reports, “Condor and the new Palmetto Cluster enable Clemson faculty to do faster and deeper research. The technology opens doors to a realm where no Clemson researcher could go before – and research grants dollars not accessible without the new supercomputing power.”
A bug introduced in the most recent developer release, Condor v6.9.4, can cause jobs running in Condor’s Standard Universe to write corrupt data. The Condor Team has written a patch that will be included in v6.9.5 which is forthcoming; if you need the patch sooner, please contact us. This bug only impacts jobs that write binary (non-ASCII) files and are submitted to the Standard Universe.
High Throughput Computing Week in Edinburgh is a four-day event in November 2007 that will discuss several aspects of high throughput computing (HTC), including transforming a task so it can benefit from HTC and choosing technologies to deliver HTC, as well as trying some HTC systems in-person. Condor will be one of the four technologies discussed. More Details
Cardiff University announces the availability of a 1,000 processor Condor Pool to the UK National Grid Service. “Use by the University’s researchers has grown considerably in this time and has saved local researchers years of time in processing their results.” “Using Gasbor to build a model of a typical Tropoelastin molecule takes 30 hours. Using Condor the same simulation ran in just two hours.”
When Quill was first developed, it was designed to work with older versions of the PostgreSQL database server. Newer versions of PostgreSQL have stronger security features, which can be enabled in the PostgreSQL configuration without any changes to the Quill daemon. We recommend that all Quill sites upgrade to the latest version of PostgreSQL (8.2) and make these easy changes to their PostgreSQL configuration. Without them, any user who can sniff the network between the Quill daemon and the PostgreSQL server can obtain the Quill database password and make changes to the Quill database. This can change the output of condor_q and condor_history, but cannot otherwise impact Condor’s correctness or security; unauthorized users cannot use this database password to run jobs or alter Condor’s configuration. A second problem with the previously recommended configuration was that any user with the publicly available read-only Quill PostgreSQL password could create new tables in the database and store information there. While this does not affect the running of Condor in any way, sites may view it as a security problem. As of Condor 6.8.6 and 6.9.4, the Condor manual describes a more secure installation of PostgreSQL that remedies both of the above problems. The changes are as follows. First, change the authentication method (the final field) in the pg_hba.conf file from “password” to “md5”, then restart PostgreSQL for the change to take effect. Second, allow only the quillwriter account to create tables by running the following two SQL commands as the database owner: <pre> REVOKE CREATE ON SCHEMA public FROM PUBLIC; GRANT CREATE ON SCHEMA public TO quillwriter; </pre>
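For reference, a pg_hba.conf entry using md5 authentication looks roughly like the sketch below; the database name, user, and address range are illustrative placeholders, not values from this announcement.

```
# pg_hba.conf sketch: the final field selects the authentication method.
# Changing "password" (cleartext) to "md5" (challenge-response) keeps the
# Quill database password from being readable on the wire.
# TYPE  DATABASE  USER         ADDRESS          METHOD
host    quill     quillwriter  192.168.0.0/24   md5
```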
In 2006 the Durham University Department of Geography began examining ways of increasing the speed at which research results could be obtained from geophysical modelling simulations. After a period of testing lasting over 6 months, the Department of Geography has successfully implemented a distributed computing system built upon the Condor platform. Since its implementation, the Condor distributed computing network has provided a unique facility for spatial modelling which is particularly suited to Monte Carlo approaches. Thanks to the processing power offered by the Condor network, members of the Catchment, River and Hillslope Science (CRHS) group have developed methods of ultra-high resolution image processing and remote sensing which are pushing the traditional boundaries of ecological monitoring at both catchment and local scales.
After four days of presentations, tutorials, and discussion sessions, Condor Week 2007 came to a close. Red Hat presented plans to integrate Condor into Red Hat and Fedora distributions, and to provide enterprise-level support for Condor installations. Government labs reported results from their deployment efforts over the past year; for example, a production Condor installation at Brookhaven National Labs consisting of over 4800 machines ran 2.8 million jobs in the past 3 months, delivering 6.2 million wallclock hours to over 400 scientists. IBM reported on an ongoing project to bring Condor and Blue Gene technologies together, in order to enable High Throughput style computing on IBM’s popular Blue Gene supercomputer. The Condor Team reported on the scalability enhancements in the Condor 6.9 development series. Check out the 35+ presentations delivered this week.
GRIDtoday reports in the article “Clemson Researchers Get Boost From Condor” that Clemson University has deployed a campus-wide computing grid built on Condor. “‘The Condor grid has enabled me to conduct my research, without a doubt,’ [Assistant Professor] Kurz said. ‘Before using the campus grid, I was completely without hope of completing the computational studies that my research required. As soon as I saw hundreds of my jobs running on the campus grid, I started sending love notes to the Condor team at Clemson.’”
The fifth in the highly successful series of International Summer Schools in Grid Computing will be held at Gripsholmsviken Hotell & Konferens of Mariefred, Sweden, near Stockholm, from 8th to 20th July 2007. The school builds on the integrated curriculum developed over the last few years which brings together the leading grid technologies from around the world, presented by leading figures, and gives students a unique opportunity to study these technologies in depth side by side. More information
GRIDtoday reports in the article “Protein Wranglers” that NCSA researchers are using Condor-G to streamline their workflow. “‘They were spending quite a bit of time doing relatively trivial management tasks, such as moving data back and forth, or resubmitting failed jobs,’ says Kufrin. After gaining familiarity with the existing ‘human-managed’ tasks that are required to carry out lengthy, computationally intensive simulations of this nature, the team identified Condor-G, an existing, proven Grid-enabled implementation of Condor, as a possible solution.”
Purdue University distributes its computing jobs across the university using Condor. “At Purdue, we’re harvesting computer downtime and putting it to good use” says Gerry McCartney, Purdue’s interim vice president for information technology and chief information officer.
Banesto is Spain’s third largest bank in volume of managed resources, and caters to over 3,000,000 customers. With the help of Cediant, Banesto has installed a Condor cluster to replace the work previously performed by a monolithic SMP. As a result, the time it takes to evaluate each portfolio has been reduced by 75%. Read more in an announcement made on the condor-users email list.
You may have heard about the fantastic computer animation in "The Wild", a Disney film that appeared in theaters last Spring and was recently released on DVD. What you may not have heard is that Condor was used to assist with the tremendous effort of managing over 75 million renders. Read a nice letter we received from Leo Chan and Jason Stowe, film Technology Supervisor and Condor Lead, respectively.
Author Irfan Habib wrote in Linux Journal magazine his experience getting started with Condor. He concludes, “Condor provides the unique possibility of using our current computing infrastructure and investments to target processing of jobs that are simply beyond the capabilities of our most powerful systems… Condor is not only a research toy, but also a piece of robust open-source software that solves real-world problems.”
The National Science Foundation (NSF) and the Department of Energy’s (DOE) Office of Science announced today that they have joined forces to fund a five-year, multimillion-dollar program to operate and expand upon the two-year-old national grid. This project collectively taps into the power of thousands of processors distributed across more than 30 participating universities and federal research laboratories. UW-Madison computer scientist Miron Livny, leader of the Condor Project, is principal investigator of OSG and will be in charge of building, maintaining, and coordinating software activities. [OSG Press Release][DDJ Article][Badger Herald Article]
Author M. Shuaib Khan offers a brief introduction to setting up Condor at Linux.com. He writes, “Condor is a powerful yet easy-to-use software system for managing a cluster of workstations.”
IBM developerWorks has published a very nice tutorial on using Condor’s web services interface (Birdbath). Jeff Mausolf, IBM Application Architect, states “This tutorial is intended to introduce the Web services interface of Condor. We’ll develop a Java technology-based Web services client and demonstrate the major functions of Condor exposed to clients through Web services. The Web services client will submit, monitor, and control jobs in a Condor environment.”
“Sleeping Computers Unravel Genetic Diseases.” “Now, with the help of the Condor middleware system, Superlink-Online is running in parallel on 200 computers at the Technion and 3,000 at the University of Wisconsin-Madison.” “‘Over the last half year, dozens of geneticists around the world have used Superlink-Online, and thousands of runs – totaling 70 computer years – have been recorded,’ says Professor Assaf Schuster, head of the Technion’s Distributed Systems Laboratory, which developed Superlink-Online’s computational infrastructure.”
The Wisconsin State Journal briefly covered Condor Week 2006. “Condor gives UW-Madison a real edge in any competitive research, including physics, biotechnology, chemistry and engineering, said Guri Sohi, computer sciences department chairman.”
Registration is now open for European Condor Week 2006. This second European Condor Week is a four-day event that gives Condor collaborators and users the chance to exchange ideas and experiences, learn about the latest research, sign up for detailed tutorials, and celebrate 10 years of collaboration between the University of Wisconsin-Madison Condor Team and the Italian Istituto Nazionale di Fisica Nucleare (INFN, the National Institute of Nuclear Physics). Please join us!
In an interview for GRIDtoday, Brooklin Gore discusses Micron’s Condor-based grid. Gore’s comments included “We have 11 ‘pools’ (individual grids, all connected via a LAN or WAN) comprising over 11,000 processors at seven sites in four countries. We selected the Condor High Throughput Computing system because it ran on all the platforms we were interested in, met our configuration needs, was widely used and open source yet well supported.”
An article in Science Grid This Week describes how Condor, combined with resources from the Open Science Grid and the University of Wisconsin Condor pool, provided over 215 CPU years in less than two months toward a discovery eagerly anticipated by particle physicists around the world.
If you are interested in learning about grid technology (including Condor) from leading authorities in the field, we encourage you to investigate the International Summer School on Grid Computing. The School will include lectures on the principles, technologies, experience and exploitation of Grids. Lectures will also review the research horizon and report recent significant successes. Lectures will be given in the mornings. In the afternoons the practical exercises will take place on the equipment installed at the School site in Ischia, Italy (near Naples). The work will be challenging but rewarding.
“Communicating outside the flock, Part 2: Integrate grid resources with Condor-G plus Globus Toolkit” at IBM's developerWorks
IBM IT Architect Jeff Mausolf writes, “The Globus Toolkit provides a grid security infrastructure. By augmenting this infrastructure with the job submission, management, and control features of Condor, we can create a grid that extends beyond the Condor pool to include resources controlled by a number of resource managers, such as LoadLeveler, Platform LSF or PBS.”
The goal of the ETICS (eInfrastructure for Testing, Integration and Configuration of Software) project is to improve the quality of Grid and distributed software by offering a practical quality assurance process to software projects, based on a build and test service. Please see the news release.
Enterprise users of Condor are joining together to form a group. If you are a business user of Condor you may be interested to learn that an effort is underway to form a Condor Enterprise Users Group. The goal of this community is to: 1- Focus on the unique needs of enterprise/commercial/business users of Condor; 2- Share best practices in the enterprise Grid computing space; 3- Collaborate on Condor features needed to better support the enterprise space; 4- Discuss ways to support the Condor project for enterprise-focused activities. To learn more, read the announcement posted to the condor-users list. Subscribe to the group at mail-lists/.
IBM IT Architect Jeff Mausolf writes, “In this article, we will look at how you can use Condor to simplify the tasks associated with job submission, monitoring, and control in a Globus environment. In Part 2, we will look at how you can use Condor’s matchmaking to make intelligent scheduling decisions based on job requirements and resource information, and then leverage the remote resource access capabilities provided by Globus to submit jobs to resources that are not part of the Condor pool.” Also available is “Part 2: Integrate grid resources with Condor-G plus Globus Toolkit.”
Article author Bruno Gonçalves says “By adding the Condor clustering software we turn this set of machines into a computing cluster that can perform high-throughput scientific computation on a large scale.”
entitled “Minister Dion Launches WindScope To Support Government’s Wind Energy Commitment”. The new WindScope software that utilizes Condor allows users to determine the ideal location to install wind turbines.
in the CIO Update for a discussion of The Hartford’s recent work with Condor. Chris Brown, director of advanced technologies at Hartford Life said, “But the alternatives were expensive, and the ability to scale with a grid was much better,” and “It wasn’t the principal reason for building our grid, but versus a more conventional solution, the grid has saved us millions of dollars.”
in Brooklin Gore’s article in Grid Today.
In Science Grid This Week: “The Condor idea, which had its root in Livny’s Ph.D. research on distributed computing systems, is that users with computing jobs to run and not enough resources on their desktop should be connected to available resources in the same room or across the globe.”
“Alain Roy: Providing Virtually Foolproof Middleware Access,” by Katie Yurkewicz in Science Grid This Week profiles Condor staff member Alain Roy.
Read about the Render Queue Integration on Cirque Digital, LLC’s Products web page. From the page: “With GDI Explorer you can submit, manage and prioritize the rendering of your 2D and 3D files directly on an unlimited number of processors using the powerful - yet free - Condor render queue. The power of all processors on your network is at your fingertips!”
at the Wisconsin Technology Network. “The University of Wisconsin’s Condor software project will provide a component of a cutting-edge grid computing system at European research heavyweight CERN, the IDG news service reports.” “After finding no commercial grid applications that satisfied all its needs, CERN took to cobbling together a system from a variety of sources, starting with the Globus Toolkit from the Globus Alliance and using the Condor project’s scheduling software.”
in Computerworld. “Instead, CERN based its grid on the Globus Toolkit from the Globus Alliance, adding scheduling software from the University of Wisconsin’s Condor project and tools developed in Italy under the European Union’s DataGrid project.”
The Hartford Financial Services Group is doing very well with Condor. See “Grid Computing At Hartford Life” by Tammy J. McInturff in the LOMA newsletter. Also, “Analyze This” by Steve Dwyer (in Insurance Networking News) mentions the Hartford Financial Services Group’s deployment of Condor. “‘Prior to the grid initiative, we had hit a ceiling with the level of horsepower we could deploy in running calculations through servers and desktops,’ says Severino.” And, from December 2004, “Mother of Invention” by Anthony O’Donnell (in Insurance & Technology magazine) describes the Hartford Financial Services Group’s deployment and use of Condor. “The [Condor] grid solution ‘is actually more robust,’ [CIO Vittorio Severino] says. ‘The value is, No. 1, it just creates capacity; No. 2, it’s a more stable environment; and then, No. 3, we enjoy the obvious cost savings.’”
This article by Jeff Mausolf is on IBM’s developerWorks web site. “Condor addresses both of these problem areas by providing a single tool that can manage a cluster of dedicated compute nodes and effectively harness otherwise wasted cycles from idle desktop workstations.”
"Using the GRID to improve the computation speed of electrical impedance tomography (EIT) reconstruction algorithms" (PDF) in Physiological Measurement"
Authors Fritschy, Horesh, Holder, and Bayford present an improvement in image reconstruction (using Condor) that makes clinical use of the images more practical. “Using the GRID middleware “Condor” and a cluster of 920 nodes, reconstruction of EIT images of the human head with a non-linear algorithm was speeded up by 25-40 times compared to serial processing of each image.”
Support for Redhat 9 and Enterprise Linux 3 has been added to the Stork data placement scheduler. See the Stork home page for release notes and downloads.
Enrique Alba and Antonio J. Nebro discuss their research efforts in ERCIM News.
Version 0.1.0 of the Condor “Setup Hawkeye” package is now available. We’ve been getting an increasing stream of requests to make it easier to add Hawkeye features to an existing Condor installation. To solve this problem, we’ve created the above package, which modifies your Condor configuration and installs the same “install module” script used in Hawkeye (except that here it is named condor_install_module instead of hawkeye_install_module). You can then download and install the same modules as for Hawkeye, and use the dynamic attributes in your Machine Ad for matchmaking purposes. More details are available through the Hawkeye page.
The usage license for Condor has changed to the Condor Public License, a very liberal license that permits installation, use, reproduction, display, modification and redistribution of Condor, with or without modification, in source and binary forms. This license has already been applied to the binary downloads for Condor version 6.4.7 for both UNIX and Windows. The Condor Team is busily preparing the source code for public release under the terms of the Condor Public License as well. Stay tuned!
Condor NT Version 6.1.16_preview is now released, and is available on the downloads page. It fixes several bugs, including some significant memory leaks, from the previous Condor NT release. There are a significant number of changes and issues to know about between Condor NT and Condor for UNIX.
The default Linux kernel shipped with Red Hat 6.2 contains a bug which Condor can trigger. This would cause Red Hat 6.2 machines running Condor to hang. All Red Hat 6.2 users should either upgrade their kernel to 2.2.16 or later, or upgrade to Condor Version 6.1.15, which will be out early the week of 8/21/00.
Condor version 6.1.11 contains bugs in the way we handle certain system calls that are used by Fortran. This is a problem particularly on Solaris and IRIX, though other platforms might have problems as well. We will release a new version shortly that corrects these problems. If you are only using vanilla jobs, this should not affect you. If you do not relink your jobs with condor_compile using the 6.1.11 libraries, you will also not have this problem (though you also won’t be able to use many of the new features in 6.1.11).
Condor version 6.1.10 contains a minor bug in the condor_status program. For some sites, this causes condor_status to have a segmentation fault, be killed by SIGSEGV, and to drop a core file when trying to display the totals at the end of condor_status. If you are upgrading to 6.1.10 from a previous version, please save your old condor_status binary and install that into your Condor bin directory once you have completed your upgrade. Version 6.1.11 will fix this bug.
Condor 6.1.9 contains a bug in the checkpointing code for “standard” jobs. Once a job checkpoints for the first time, it will no longer be able to checkpoint again. If you are using the standard universe (relinking your jobs with condor_compile), you are going to have problems with 6.1.9. A new version will be released very soon to fix this bug.
The initial version of Condor NT is now available on the downloads page.
The Condor Version 6.0 and Version 6.1 manuals, in several formats, are available for download from the Madison, WI, USA or Bologna, Italy sites.
Condor Downloads can now be downloaded from Bologna, Italy (courtesy of Istituto Nazionale di Fisica Nucleare Sezione di Bologna).
The UW Madison CS Condor pool has been configured to check for a MemoryRequirements parameter in all Condor jobs. This parameter specifies, in megabytes, how much physical memory your job needs to run efficiently. If this parameter is not specified, Condor will assume a default value of 128 MB. Condor will only run jobs on machines with enough available physical memory to satisfy the jobs’ memory requirements. To specify this parameter, please add the following to your job description file(s): <pre> +MemoryRequirements = 90 </pre> replacing 90 with the actual memory requirements of your job in megabytes. We encourage you to continue to specify your job’s virtual memory requirements with the image_size command in your job description file.
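Put together, a job description file using this parameter might look like the following sketch; the executable name and the memory values are illustrative placeholders, not requirements of the pool.

```
# Illustrative job description file (names and values are placeholders).
executable = my_analysis
universe   = standard
# Physical memory the job needs, in megabytes (128 MB assumed if omitted):
+MemoryRequirements = 90
# Virtual memory requirement, in kilobytes, via image_size:
image_size = 120000
queue
```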
A new paper discusses taking currently running processes and submitting them into Condor.
The CondorView Client contrib module automatically generates HTML pages that publish usage statistics about your pool to the World Wide Web. Quickly and easily view how much Condor is being used, how many cycles are being delivered, and who is using them. View utilization by machine platform or by user. Interact with a Java applet to customize the visualization, or to zoom-in to a specific time frame.
The Condor Checkpoint server, PVM support, and CondorView server contrib modules are now online! Condor Version 6.1.1 is also available; again, use it only if you know what you’re doing!
These optional modules enable you to install new parts of Condor without switching your entire pool to the development series. If you want a certain new feature (like support for SMP machines), we highly recommend you just install the contrib module for that feature, instead of upgrading your whole pool to the development series.
With the release of Condor Version 6.0.1, we have also introduced a new, hopefully easy to follow version numbering scheme for future versions of Condor. There will now be both a stable release series and a development release series available at any given time (much like the Linux kernel).
It contains many bug patches, ports to new platforms including IRIX 6.2 and HPUX 10.20, updated man pages and installation docs, and handy extras (such as the condor_compile command).