diff --git a/.github/workflows/deploy-preview.yml b/.github/workflows/deploy-preview.yml
new file mode 100644
index 00000000..6ccba677
--- /dev/null
+++ b/.github/workflows/deploy-preview.yml
@@ -0,0 +1,27 @@
+
+name: Dispatch Preview Update
+on:
+  push:
+    branches: [development]
+
+jobs:
+  dispatch:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Setup SSH Keys and known_hosts
+        uses: webfactory/ssh-agent@v0.5.3
+        with:
+          ssh-private-key: ${{ secrets.DEPLOY_KEY }}
+
+
+      - name: Pull new posts
+        run: |
+          git clone --recursive git@github.com:CHTC/article-preview.git
+          cd article-preview
+          git config user.name "GitHub Actions"
+          git config user.email "actions@github.com"
+          git submodule update --remote
+          git add _posts
+          git remote -v
+          git commit -m "Article Submodule Updated"
+          git push git@github.com:CHTC/article-preview.git
diff --git a/2022-06-30-Opotowsky.md b/2022-06-30-Opotowsky.md
new file mode 100644
index 00000000..8d8ca394
--- /dev/null
+++ b/2022-06-30-Opotowsky.md
@@ -0,0 +1,65 @@
+---
+title: "Expediting Nuclear Forensics and Security Using High Throughput Computing"
+
+author: Hannah Cheren
+
+publish_on:
+  - htcondor
+  - path
+  - chtc
+
+type: user
+
+canonical_url: https://osg-htc.org/spotlights/Opotowsky.html
+
+image:
+  path: "https://raw.githubusercontent.com/CHTC/Articles/main/images/Opotowsky-card.jpeg"
+  alt: Photo by Dan Myers on Unsplash
+
+description: Arrielle C. Opotowsky, a 2021 Ph.D. graduate from the University of Wisconsin-Madison's Department of Engineering Physics, describes how she utilized high throughput computing to expedite nuclear forensics investigations.
+excerpt: Arrielle C. Opotowsky, a 2021 Ph.D. graduate from the University of Wisconsin-Madison's Department of Engineering Physics, describes how she utilized high throughput computing to expedite nuclear forensics investigations.
+
+card_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/Opotowsky-card.jpeg"
+card_alt: Arrielle C. Opotowsky, a 2021 Ph.D.
graduate from the University of Wisconsin-Madison's Department of Engineering Physics, describes how she utilized high throughput computing to expedite nuclear forensics investigations.
+
+banner_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/Opotowsky-card.jpeg"
+banner_alt: Arrielle C. Opotowsky, a 2021 Ph.D. graduate from the University of Wisconsin-Madison's Department of Engineering Physics, describes how she utilized high throughput computing to expedite nuclear forensics investigations.
+---
+ ***Arrielle C. Opotowsky, a 2021 Ph.D. graduate from the University of Wisconsin-Madison's Department of Engineering Physics, describes how she utilized high throughput computing to expedite nuclear forensics investigations.***
+
+ “Each year, there can be from two to twenty incidents related to the malicious use of nuclear materials,” including theft, sabotage, illegal transfer, and even terrorism, [Arrielle C. Opotowsky](http://scifun.org/Thesis_Awards/opotowsky.html) direly warned. Opotowsky, a 2021 Ph.D. graduate from the University of Wisconsin-Madison's Department of Engineering Physics, immediately grabbed the audience’s attention at [HTCondor Week 2022](https://agenda.hep.wisc.edu/event/1733/timetable/?view=standard).
+
+ Opotowsky's work focuses on nuclear forensics. Preventing nuclear terrorism is the primary concern of nuclear security, and nuclear forensics is “the *response* side to a nuclear event occurring,” Opotowsky explains. Typically in a nuclear forensics investigation, specific measurements need to be processed; unfortunately, some of these measurements can take months to process. Opotowsky calls this “slow measurement” general mass spectrometry. Although this measurement can help point investigators in the right direction, they wouldn’t be able to do so until long after the incident has occurred.
+
+ In trying to learn how she could expedite a nuclear forensics investigation, Opotowsky wanted to see if gamma spectroscopy, a “fast measurement,” could be the solution. This measurement can potentially point investigators in the right direction, but in days rather than months.
+
+ To test whether this “fast measurement” could expedite a nuclear forensics investigation compared to a “slow measurement,” Opotowsky created a workflow and compared the two measurements.
+
+ While Opotowsky was a graduate student working on this problem, the workflow she created was running on her personal computer and suddenly stopped working. In a panic, she went to her advisor, [Paul Wilson](https://directory.engr.wisc.edu/ep/faculty/wilson_paul), for help, and he pointed her to the UW-Madison Center for High Throughput Computing (CHTC).
+
+ CHTC Research Computing Facilitators came to her aid, and “the support was phenomenal – there was a one-on-one introduction and a tutorial and incredible help via emails and office hours…I had a ton of help along the way.”
+
+ She needed capacity from the CHTC because she used a machine-learning workflow with tens of case variations. Her training databases were relatively large because she used several algorithms and hyperparameter variations and wanted to predict several labels. The sheer magnitude of these training databases is the leading reason why Opotowsky needed the services of the CHTC.
+
+ She used two computation categories, the second of which required a specific capability offered by the CHTC: the ability to scale out a large problem into an ensemble of smaller jobs running in parallel. With 500,000 total entries in the databases and a limit of 10,000 jobs per case submission, Opotowsky split the computations into fifty calculations per job. This method resulted in lower memory needs per job, each taking only a few minutes to run.
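The job-splitting arithmetic above (500,000 entries at fifty calculations per job yields the 10,000-job submission limit) can be sketched in a few lines. This is an illustration of the general chunking technique, not Opotowsky's actual code; the function name and numbers-per-chunk layout are invented for the example.

```python
# Illustrative sketch: split a large parameter sweep into fixed-size chunks,
# one chunk per high-throughput job, as described above.
def make_chunks(n_entries, chunk_size):
    """Return (start, end) index pairs, one pair per job."""
    return [(i, min(i + chunk_size, n_entries))
            for i in range(0, n_entries, chunk_size)]

jobs = make_chunks(n_entries=500_000, chunk_size=50)
print(len(jobs))          # 10,000 jobs, each handling 50 calculations
print(jobs[0], jobs[-1])  # first and last (start, end) index ranges
```

Each job then only needs to load its fifty-entry slice, which is what kept the per-job memory footprint and runtime small.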
+
+ “I don’t think my research would have been possible” without HTC, Opotowsky noted as she reflected on how the CHTC impacted her research. “The main component of my research driving my need [for the CHTC] was the size of my database. It would’ve had to be smaller, have fewer parameter variations, and that ‘fast’ measurement was like a ‘real-world’ scenario; I wouldn’t have been able to have that.”
+
+ Little did Opotowsky know that her experience using HTC would also benefit her professionally. Having HTC experience has helped Opotowsky in job interviews and in securing her current position in nuclear security. As a nuclear methods software engineer, “knowledge of designing code and interacting with job submission systems is something I use all the time,” she comments; “[learning HTC] was a wonderful experience to gain” from both a research and a professional point of view.
+
+...
+
+ *Watch a video recording of Arrielle C. Opotowsky’s talk at HTCondor Week 2022, and browse her [slides](https://agenda.hep.wisc.edu/event/1733/contributions/25511/attachments/8299/9577/HTCondorWeek_AOpotowsky.pdf).*
+
+
diff --git a/2022-07-06-Wilcots.md b/2022-07-06-Wilcots.md
new file mode 100644
index 00000000..19e0fe36
--- /dev/null
+++ b/2022-07-06-Wilcots.md
@@ -0,0 +1,124 @@
+---
+title: "Keynote Address: The Future of Radio Astronomy Using High Throughput Computing"
+
+author: Hannah Cheren
+
+publish_on:
+  - htcondor
+  - path
+  - chtc
+
+type: user
+
+canonical_url: https://htcondor.org/featured-users/2022-07-06-Wilcots.html
+
+image:
+  path: "https://raw.githubusercontent.com/CHTC/Articles/main/images/Wilcots-card.png"
+  alt: Image of the black hole in the center of our Milky Way galaxy.
+
+description: Eric Wilcots, UW-Madison dean of the College of Letters & Science and the Mary C. Jacoby Professor of Astronomy, dazzles the HTCondor Week 2022 audience.
+excerpt: Eric Wilcots, UW-Madison dean of the College of Letters & Science and the Mary C.
Jacoby Professor of Astronomy, dazzles the HTCondor Week 2022 audience.
+
+card_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/Wilcots-card.png"
+card_alt: Eric Wilcots, UW-Madison dean of the College of Letters & Science and the Mary C. Jacoby Professor of Astronomy, dazzles the HTCondor Week 2022 audience.
+
+banner_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/Wilcots-card.png"
+banner_alt: Eric Wilcots, UW-Madison dean of the College of Letters & Science and the Mary C. Jacoby Professor of Astronomy, dazzles the HTCondor Week 2022 audience.
+---
+ ***Eric Wilcots, UW-Madison dean of the College of Letters & Science and the Mary C. Jacoby Professor of Astronomy, dazzles the HTCondor Week 2022 audience.***
+
+ “My job here is to…inspire you all with a sense of the discoveries to come that will need to be enabled by” high throughput computing (HTC), said Eric Wilcots as he opened his keynote for HTCondor Week 2022. Wilcots is the UW-Madison dean of the College of Letters & Science and the Mary C. Jacoby Professor of Astronomy.
+
+ Wilcots points out that the black hole image (shown above) is a remarkable feat in the world of astronomy. It is “only the third such black hole imaged in this way by the Event Horizon Telescope,” and it was made possible with the help of the HTCondor Software Suite (HTCSS).
+
+ **Beginning to build the future**
+
+ Wilcots described how in the 1940s, a group of universities recognized that no single university could build a radio telescope necessary to advance science. To access these kinds of telescopes, the universities would need to have the national government involved, as it alone had this capability at the time. In 1946, these universities created Associated Universities Incorporated (AUI), which eventually became the management agency for the National Radio Astronomy Observatory (NRAO).
+
+ Advances in radio astronomy rely on the technology currently available to experts in the field.
Wilcots explained that “the science demands more sensitivity, more resolution, and the ability to map large chunks of the sky simultaneously.” New and emerging technologies must continue pushing forward to discover the next big thing in radio astronomy. + + This next generation of science requires more sensitive technology with higher spectra resolution than the Karl G. Jansky Very Large Array (JVLA) can provide. It also requires sensitivity in a particular chunk of the spectrum that neither the JVLA nor Atacama Large Millimeter/submillimeter Array (ALMA) can achieve. Wilcots described just what piece of technology astronomers and engineers need to create to reach this level of sensitivity. “We’re looking to build the Next Generation Very Large Array (ngVLA)...an instrument that will cover a huge chunk of spectrum from 1 GHz to 116 GHz.” + + **The fundamentals of the ngVLA** + + “The unique and wonderful thing about interferometry, or the basis of radio astronomy,” Wilcots discussed, “is the ability to have many individual detectors or dishes to form a telescope.” Each dish collects signals, creating an image or spectrum of the sky when combined. Because of this capability, engineers working on these detectors can begin to collect signals right away, and as more dishes get added, the telescope grows larger and larger. + + Many individual detectors also mean lots of flexibility in the telescope arrays built, Wilcots explained. Here, the idea is to do several different arrays to make up one telescope. A particular scientific case drives each of these arrays: + - Main Array: a dish that you can control and point accurately but is also robust; it’ll be the workhorse of the ngVLA, simultaneously capable of high sensitivity and high-resolution observations. + - Short Baseline Array: dishes that are very close together, which allows you to have a large field of view of the sky. + - Long Baseline Array: spread out across the continental United States. 
The idea here is the longer the baseline, the higher the resolution. Dishes that are well separated allow the user to get spectacular spatial resolution of the sky. For example, the Event Horizon Telescope that took the image of the black hole is a telescope that spans the globe, which is the longest baseline we can get without putting it into orbit.
+
+ A consensus study report called Pathways to Discovery in Astronomy and Astrophysics for the 2020s (Astro2020) identified the ngVLA as a high priority. The construction of this telescope should begin this decade and be completed by the middle of the 2030s.
+
+ **Future of radio astronomy: planet formation**
+
+ An area of research that radio astronomers are interested in examining in the future is imaging the formation of planets, Wilcots notes. Right now, astronomers can detect a planet’s presence and deduce specific characteristics, but being able to detect a planet directly is the next huge priority.
+
+ One place astronomers might be able to do this with something like the ngVLA is in the early phases of planet formation within a planetary system. The thermal emissions from this process are bright enough to be detected by a telescope like the ngVLA. So the idea is to use this telescope to map an image of nearby planetary systems and begin to image the early stages of planet formation directly. A catalog of these planets forming will allow astronomers to understand what happens when planetary systems, like our own, form.
+
+ **Future of radio astronomy: molecular systems**
+
+ Wilcots explains that radio astronomers have discovered the spectral signature of innumerable molecules within the past fifty years. The ngVLA is being designed to probe, detect, catalog, and understand the origin of complex molecules and what they might tell us about star and planet formation.
Wilcots comments in his talk that “this type of work is spawning a new type of science…a remarkable new discipline of astrobiology is emerging from our ability to identify and trace complex organic molecules.”
+
+ **Future of radio astronomy: galaxy completion**
+
+ Next, Wilcots discusses how radio astronomers want to understand how stars form in the first place and the processes that drive the collapse of clouds of gas into regions of star formation.
+
+ The gas in a galaxy tends to extend well beyond the visible part of the galaxy, and this enormous gas reservoir is how the galaxy can make stars.
+
+ Astronomers like Wilcots want to know where the gas is, what drives that process of converting the gas into stars, what role the environment might play, and finally, what makes a galaxy stop creating stars.
+
+ ngVLA will be able to answer these questions as it combines the sensitivity and spatial resolution needed to take images of gas clouds in nearby galaxies while also capturing the full extent of that gas.
+
+ **Future of radio astronomy: black holes**
+
+ Wilcots’ look into the future of radio astronomy finishes with the idea and understanding of black holes.
+
+ Multi-messenger astrophysics helps experts recognize that information about the universe is not carried solely by electromagnetic radiation, its best-known channel; there is more than one way astronomers can look at the universe.
+
+ More recently, astronomers have been looking at gravitational waves. In particular, they’ve been looking at how they can find a way to detect the gravitational waves produced by two black holes orbiting around one another to determine each black hole’s mass and learn something about them. As the recent EHT images show, we need the high resolution and sensitivity of radio telescopes to understand the nature of black holes fully.
+
+ **A look toward the future**
+
+ The next step is for the NRAO to create a prototype of the dishes they want to install for the telescope.
Then, it’s just a question of whether or not they can build and install enough dishes to deliver this instrument at its full capacity. Wilcots elaborates, “we hope to transition to full scientific operations by the middle of next decade (the 2030s).”
+
+ The distinguished administrator expressed that “something that’s haunted radio astronomy for a while is that to do the imaging, you have to ‘be in the club,’ ” meaning that not just anyone can access the science coming out of these telescopes. The goal of the NRAO moving forward is to create science-ready data products so that this information can be more widely available to anyone, not just those with intimate knowledge of the subject.
+
+ This effort to make this science more accessible has been part of a budding collaboration between UW-Madison, the NRAO, and a consortium of Historically Black Colleges and Universities and other Minority Serving Institutions in what is called Project RADIAL.
+
+ “The idea behind RADIAL is to broaden the community; not just of individuals engaged in radio astronomy, but also of individuals engaged in the computing that goes into doing the great kind of science we have,” Wilcots explains.
+
+ In the summer of 2022, half a dozen undergraduate students from the RADIAL consortium will be on the UW-Madison campus doing summer research. The goal is to broaden awareness and increase the participation of communities not typically involved in these discussions in the kind of research done in the radio astronomy field.
+
+ “We laid the groundwork for a partnership with a number of these institutions, and that partnership is alive and well,” Wilcots remarks, “so stay tuned for more of that, and we will be advancing that in the upcoming years.”
+
+...
+
+ *Watch a video recording of Eric Wilcots’ talk at HTCondor Week 2022.*
+
diff --git a/2022-07-18-EOL-OSG.md b/2022-07-18-EOL-OSG.md
new file mode 100644
index 00000000..5d1f00a6
--- /dev/null
+++ b/2022-07-18-EOL-OSG.md
@@ -0,0 +1,47 @@
+---
+title: "Retirements and New Beginnings: The Transition to Tokens"
+
+author: Hannah Cheren
+
+publish_on:
+  - osg
+  - path
+  - htcondor
+
+type: news
+
+canonical_url: https://osg-htc.org/spotlights/EOL-OSG.html
+
+image:
+  path:
+  alt:
+
+description: May 1, 2022, officially marked the retirement of OSG 3.5, GridFTP, and GSI dependencies. OSG 3.6, up and running since February of 2021, has taken its place, relying on WebDAV and bearer tokens.
+excerpt: May 1, 2022, officially marked the retirement of OSG 3.5, GridFTP, and GSI dependencies. OSG 3.6, up and running since February of 2021, has taken its place, relying on WebDAV and bearer tokens.
+
+card_src:
+card_alt:
+
+banner_src:
+banner_alt:
+---
+
+ ***May 1, 2022, officially marked the retirement of OSG 3.5, GridFTP, and GSI dependencies. OSG 3.6, up and running since February of 2021, has taken its place, relying on WebDAV and bearer tokens.***
+
+ In December of 2019, OSG announced its plan to transition towards bearer tokens and WebDAV-based file transfer, which would ultimately culminate in the retirement of OSG 3.5. Nearly two and a half years later, after significant development and work with collaborators on the transition, OSG marked the end of support for OSG 3.5.
+
+ OSG celebrated the successful and long-planned OSG 3.5 retirement and transition to OSG 3.6, the first version of the OSG Software Stack without any Globus dependencies. Instead, it relies on WebDAV (an extension to HTTP/S allowing for distributed authoring and versioning of files) and bearer tokens.
+
+ Jeff Dost, OSG Coordinator of Operations, reports that the transition “was a big success!” Ultimately, OSG made the May 1st deadline without having to backtrack and put out new fires. Dost notes, however, that “the transition was one of the most difficult ones I can remember in the ten plus years of working with OSG, due to all the coordination needed.”
+
+ Looking back, for nearly fifteen years, communications in OSG were secured with X.509 certificates and proxies via the Grid Security Infrastructure (GSI) as an Authentication and Authorization Infrastructure (AAI).
+
+ Then, in June of 2017, Globus announced the end of support for its open-source Toolkit that the OSG depended on. In October, members of the grid community established the Grid Community Forum (GCF) to continue supporting the Toolkit to ensure that research could continue uninterrupted.
+
+ While the OSG continued contributing to the Grid Community Toolkit (GCT), the GCF’s fork of the Globus Toolkit, the long-term goal was to transition the research community to token-based pilot job authentication instead of X.509 proxy authentication.
+
+ A more detailed account of the OSG-LHC GridFTP and GSI migration plans can be found in [this document](https://docs.google.com/document/d/1DAFeAaUmHHVcJGZMTIDUtLs9koCruQRDY1sJq1opeNs/edit#heading=h.6f8tit251wrg). Please visit the GridFTP and GSI Migration [FAQ page](https://osg-htc.org/technology/policy/gridftp-gsi-migration/index.html) if you have any questions. For more information and news about OSG 3.6, please visit the [OSG 3.6 News](https://osg-htc.org/docs/release/osg-36/) release documentation page.
+
+...
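To make the contrast concrete, here is a minimal sketch of what token-based authentication looks like on the wire: the client simply attaches a signed bearer token (RFC 6750) to each HTTPS request, rather than performing a GSI certificate handshake. This is an illustration only, not OSG's actual client code; the endpoint URL and token string are placeholders.

```python
# Minimal illustration of RFC 6750 bearer-token authentication, the style of
# AAI described above. The endpoint and token below are made-up placeholders.
from urllib.request import Request

token = "eyJhbGciOiJSUzI1NiJ9.payload.signature"  # placeholder, not a real token

req = Request(
    "https://origin.example.org/webdav/results.dat",  # hypothetical WebDAV endpoint
    method="PUT",
    headers={"Authorization": f"Bearer {token}"},
)

# The entire credential travels in a single HTTP header; no proxy certificate
# or multi-round-trip handshake is needed on the client side.
print(req.get_method(), req.get_full_url())
```

The design win is that any HTTP-speaking tool can present such a token, which is part of what made WebDAV-based transfers a practical replacement for GridFTP.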
+
+ *If you have any questions about the retirement of OSG 3.5 or the implementation of OSG 3.6, please contact help@opensciencegrid.org.*
diff --git a/2022-07-18-Messick.md b/2022-07-18-Messick.md
new file mode 100644
index 00000000..81219e5d
--- /dev/null
+++ b/2022-07-18-Messick.md
@@ -0,0 +1,65 @@
+---
+title: "LIGO's Search for Gravitational Wave Signals Using HTCondor"
+
+author: Hannah Cheren
+
+publish_on:
+  - htcondor
+  - path
+  - chtc
+
+type: user
+
+canonical_url: https://htcondor.org/featured-users/2022-07-06-Messick.html
+
+image:
+  path: "https://raw.githubusercontent.com/CHTC/Articles/main/images/Messick-card.png"
+  alt: Image of two black holes from Cody Messick’s presentation slides.
+
+description: Cody Messick, a Postdoc at the Massachusetts Institute of Technology (MIT) working for the LIGO lab, describes LIGO's use of HTCondor to search for new gravitational wave sources.
+excerpt: Cody Messick, a Postdoc at the Massachusetts Institute of Technology (MIT) working for the LIGO lab, describes LIGO's use of HTCondor to search for new gravitational wave sources.
+
+card_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/Messick-card.png"
+card_alt: Image of two black holes from Cody Messick’s presentation slides.
+
+banner_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/Messick-card.png"
+banner_alt: Image of two black holes from Cody Messick’s presentation slides.
+---
+ ***Cody Messick, a Postdoc at the Massachusetts Institute of Technology (MIT) working for the LIGO lab, describes LIGO's use of HTCondor to search for new gravitational wave sources.***
+
+ High-throughput computing (HTC) is critical to astronomy, from black hole research to radio astronomy and beyond.
At the [2022 HTCondor Week](https://agenda.hep.wisc.edu/event/1733/timetable/?view=standard), another area of astronomy was put in the spotlight by [Cody Messick](https://space.mit.edu/people/messick-cody/), a researcher working for the [LIGO](https://space.mit.edu/instrumentation/ligo/) lab and a Postdoc at the Massachusetts Institute of Technology (MIT). His work focuses on a gravitational-wave analysis that he’s been running with the help of HTCondor to search for new gravitational wave signals.
+
+ Starting with general relativity and why it’s crucial to his work, Messick explains that “it tells us two things; first, space and time are not separate entities but are instead part of a four-dimensional object called space-time. Second, space-time is warped by mass and energy, and it’s these changes to the geometry of space-time that we experience as gravity.”
+
+ Messick notes that general relativity is important to his work because it predicts the existence of gravitational waves. These waves are tiny ripples in the curvature of space-time that travel at the speed of light and stretch and compress space. Accelerating non-spherically symmetric masses generate these waves.
+
+ Generating ripples in the curvature of space-time large enough to be detectable using modern ground-based gravitational-wave observatories takes an enormous amount of energy; the observations made thus far have come from the mergers of compact binaries, pairs of extraordinarily dense yet relatively small astronomical objects that spiral into each other at speeds approaching the speed of light. Black holes and neutron stars are examples of these so-called compact objects, both of which are, or nearly are, perfectly spherical.
+
+ Messick and his team first detected two black holes traveling at two-thirds the speed of light right before they collided.
“It’s these fantastic amounts of energy in a collision that moves our detectors by less than the radius of a proton, so we need extremely energetic explosions of collisions to detect these things.”
+
+ Messick looks for specific gravitational waveforms during the data analysis. “We don’t know which ones we’re going to look for or see in advance, so we look for about a million different ones.” They then use matched filtering to find the probability that the random noise in the detectors would generate something that looks like a gravitational wave; the first gravitational-wave observation had less than a 1 in 3.5 billion chance of coming from noise and matched theoretical predictions from general relativity extremely well.
+
+ Messick's work with external collaborators outside the LIGO-Virgo-KAGRA collaboration looks for systems their normal analyses are not sensitive to. Scientists use the parameter kappa to characterize the ability of a nearly spherical object to distort when spinning rapidly or, in simple terms, how squished a sphere will become when spinning quickly.
+
+ LIGO searches are insensitive to any signal with a kappa greater than approximately ten. “There could be [signals] hiding in the data that we can’t see because we’re not looking with the right waveforms,” Messick explains. His analysis has been working on this problem.
+
+ Messick uses HTCondor DAGs to model his workflows, which he modified to make integration with the OSG easier. The first job checks the frequency spectrum of the noise; subsequent jobs aggregate the frequency spectrum, decompose it by detector type, and finally run the filtering process.
+
+Although Messick’s work is more physics-heavy than computationally driven, he remarks that “HTCondor is extremely useful to us… it can fit the work we’ve been doing very, very naturally.”
+
+...
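As a rough illustration of the DAG structure described above, an HTCondor DAGMan input file chaining those stages might look like the following sketch. The node and submit-file names are hypothetical, invented for this example rather than taken from Messick's actual analysis.

```
# Hypothetical DAGMan sketch of a staged workflow like the one described above.
# Node names and .sub files are illustrative only.
JOB noise_spectrum  noise_spectrum.sub
JOB aggregate       aggregate.sub
JOB decompose       decompose.sub
JOB filter          filter.sub

PARENT noise_spectrum CHILD aggregate
PARENT aggregate      CHILD decompose
PARENT decompose      CHILD filter
```

Submitting such a file with `condor_submit_dag` makes HTCondor run each stage only after its parent stage has completed, which is what lets a multi-step analysis like this be expressed as a single workflow.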
+
+ *Watch a video recording of Cody Messick’s talk at HTCondor Week 2022, and browse his [slides](https://agenda.hep.wisc.edu/event/1733/contributions/25501/attachments/8303/9586/How%20LIGO%20Analysis%20is%20using%20HTCondor.pdf).*
+
+
diff --git a/2022-09-27-DoIt-Article-Summary.md b/2022-09-27-DoIt-Article-Summary.md
index 6b09f13e..984389f2 100644
--- a/2022-09-27-DoIt-Article-Summary.md
+++ b/2022-09-27-DoIt-Article-Summary.md
@@ -1,5 +1,5 @@
 ---
-title: "Solving for the future: Investment, new coalition levels up research computing infrastructure at UW–Madison"
+title: Summary of "Solving for the future; Investment, new coalition levels up research computing infrastructure at UW–Madison"
 
 author: Hannah Cheren
 
diff --git a/2022-11-03-ucsd-external-release.md b/2022-11-03-ucsd-external-release.md
new file mode 100644
index 00000000..d0ce2a77
--- /dev/null
+++ b/2022-11-03-ucsd-external-release.md
@@ -0,0 +1,49 @@
+---
+title: PATh Extends Access to Diverse Set of High Throughput Computing Research Programs
+
+author: Cannon Lock
+
+publish_on:
+- path
+
+type: news
+
+canonical_url: "https://path-cc.io/news/2022-11-03-ucsd-external-release"
+
+image:
+  path: "https://raw.githubusercontent.com/CHTC/Articles/main/images/ucsd-public-relations.png"
+  alt: The colors on the chart correspond to the total number of core hours – nearly 884,000 – utilized by researchers at participating universities on PATh Facility hardware located at SDSC.
+
+description: |
+  UCSD announces the new PATh Facility and discusses its impact on science.
+excerpt: |
+  UCSD announces the new PATh Facility and discusses its impact on science.
+
+card_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/ucsd-public-relations.png"
+card_alt: The colors on the chart correspond to the total number of core hours – nearly 884,000 – utilized by researchers at participating universities on PATh Facility hardware located at SDSC.
+---
+
+Finding the right road to research results is easier when there is a clear PATh to follow. The Partnership to Advance Throughput Computing ([PATh](https://path-cc.io/))—a partnership between the [OSG Consortium](https://osg-htc.org/) and the University of Wisconsin-Madison’s Center for High Throughput Computing ([CHTC](https://chtc.cs.wisc.edu/)) supported by the National Science Foundation (NSF)—has cleared the way for science and engineering researchers for years with its commitment to advancing distributed high throughput computing (dHTC) technologies and methods.
+
+HTC involves running a large number of independent computational tasks over long periods of time—from hours and days to weeks or months. dHTC tools leverage automation and build on distributed computing principles to save researchers with large ensembles incredible amounts of time by harnessing the computing capacity of thousands of computers in a network—a feat that with conventional computing could take years to complete.
+
+Recently, PATh launched the [PATh Facility](https://path-cc.io/facility/index.html), a dHTC service meant to handle HTC workloads in support and advancement of NSF-funded open science. It was announced earlier this year via a [Dear Colleague Letter](https://www.nsf.gov/pubs/2022/nsf22051/nsf22051.jsp) issued by the NSF, which identified a diverse set of [eligible research programs](https://www.nsf.gov/pubs/2022/nsf22051/nsf22051.jsp) that range across 14 domain science areas, including geoinformatics, computational methods in chemistry, cyberinfrastructure, bioinformatics, astronomy, arctic research and more. Through this 2022-2023 fiscal year pilot project, the NSF awards credits for access to the PATh Facility, and researchers can request computing credits associated with their NSF awards. There are two ways to request credit: 1) within new proposals or 2) with existing awards via an email request for additional credits to participating program officers.
+
+“It is a remarkable program because it spans almost the entirety of the NSF’s directorates and offices,” said San Diego Supercomputer Center ([SDSC](https://www.sdsc.edu/)) Director Frank Würthwein, who also serves as executive director of the OSG Consortium.
+
+Access to the PATh Facility offers researchers approximately 35,000 modern cores and up to 44 A100 GPUs. Recently, SDSC, located at [UC San Diego](https://ucsd.edu/), added PATh Facility hardware on its [Expanse](https://www.sdsc.edu/services/hpc/expanse/) supercomputer for use by researchers with PATh credits. According to SDSC Deputy Director Shawn Strande: “Within the first two weeks of operations, we saw researchers from 10 different institutions, including one minority serving institution, across nearly every field of science. The beauty of the PATh model of system integration is that researchers have access as soon as the resource is available via OSG. PATh democratizes access by lowering barriers to doing research on advanced computing resources.”
+
+While the PATh credit ecosystem is still growing, any PATh Facility capacity not used for credit will be available to the Open Science Pool ([OSPool](https://osg-htc.org/services/open_science_pool.html)) to benefit all open science under a Fair-Share allocation policy. “For researchers familiar with the OSPool, running HTC workloads on the PATh Facility should feel like second nature,” said Christina Koch, PATh’s research computing facilitator.
+
+“Like the OSPool, the PATh Facility is nationally spanning, geographically distributed and ideal for HTC workloads. But while resources on the OSPool belong to a diverse range of campuses and organizations that have generously donated their resources to open science, the allocation of capacity in the PATh Facility is managed by the PATh project itself,” said Koch.
+ +PATh will eventually span six national sites: SDSC at UC San Diego, CHTC at the University of Wisconsin-Madison, the Holland Computing Center at the University of Nebraska-Lincoln, Syracuse University’s Research Computing group, the Texas Advanced Computing Center at the University of Texas at Austin and Florida International University’s AMPATH network in Miami. + +PIs may contact [credit-accounts@path-cc.io](mailto:credit-accounts@path-cc.io) with questions about PATh resources, using HTC, or estimating credit needs. More details also are available on the [PATh credit accounts](https://path-cc.io/services/credit-accounts/) web page. + + \ No newline at end of file diff --git a/2022-11-09-CHTC-pool-record.md b/2022-11-09-CHTC-pool-record.md new file mode 100644 index 00000000..4140f541 --- /dev/null +++ b/2022-11-09-CHTC-pool-record.md @@ -0,0 +1,65 @@ +--- +title: CHTC Pool Hits Record Number of Core Hours + +author: Shirley Obih + +publish_on: + - htcondor + - path + - chtc + +type: news + +canonical_url: https://chtc.cs.wisc.edu/CHTC-pool-record.html + +image: + path: https://raw.githubusercontent.com/CHTC/Articles/main/images/Pool-Record-Image.jpg + alt: Pool Record Banner + +description: CHTC smashes record +excerpt: CHTC smashes record + +card_src: https://raw.githubusercontent.com/CHTC/Articles/main/images/Pool-Record-Image.jpg +card_alt: Pool Record Banner + +banner_src: https://raw.githubusercontent.com/CHTC/Articles/main/images/Pool-Record-Image.jpg +banner_alt: Pool Record Banner +--- + +CHTC users recorded their highest-ever usage of the CHTC Pool on October 18th this year - utilizing +over 700,000 core hours - only to have that record broken again a mere two days later on Oct 20th, +with a total of 710,796 core hours reached. + +Center for High Throughput Computing (CHTC) users are hard at work smashing records, with two nearly consecutive record-setting days of core hour usage.
+October 20th saw the highest daily core hour total in the CHTC Pool, with 710,796 hours utilized, a feat attained +just two days after the October 18th record of 705,801 core hours. + +What is contributing to these records? One factor likely is UW’s investment in new hardware. +UW-Madison’s research computing hardware recently underwent a [substantial hardware refresh](https://chtc.cs.wisc.edu/DoIt-Article-Summary.html), +adding 207 new servers representing over 40,000 “batch slots” of computing capacity. + +However, additional capacity requires researchers ready and able to use it. +The efforts of the CHTC facilitation team, led by Christina Koch, contributed to +this readiness. Since September 1, CHTC's Research Computing Facilitators have met +with 70 new users for an introductory consultation, and there have been over 80 +visits to the twice-weekly drop-in office hours hosted by the facilitation team. +Koch notes that "using large-scale computing can require skills and concepts that +are new to most researchers - we are here to help bridge that gap." + +Finally, the hard work of the researchers themselves is another linchpin to these records. +Over 80 users spanning many fields of science contributed to this success, including +these users with substantial usage: + +- [IceCube Neutrino Observatory](https://icecube.wisc.edu): an observatory operated by the University of Wisconsin–Madison, designed to observe the cosmos from deep within the South Pole ice. +- [ECE_miguel](https://www.ece.uw.edu/people/miguel-a-ortega-vazquez/): In the Department of Electrical and Computer Engineering, Joshua San Miguel’s group explores new paradigms in computer architecture. +- [MSE_Szlufarska](https://directory.engr.wisc.edu/mse/Faculty/Szlufarska_Izabela/): Izabela Szlufarska’s lab focuses on computational materials science and mechanical behavior at the nanoscale, using atomic scale modeling to understand and design new materials.
+- [Genetics_Payseur](https://payseur.genetics.wisc.edu): Genetics professor Bret Payseur’s lab uses genetics and genomics to understand mechanisms of evolution. +- [Pharmacy_Jiang](https://apps.pharmacy.wisc.edu/sopdir/jiaoyang_jiang/index.php): Pharmacy professor Jiaoyang Jiang’s interests span the gap between biology and chemistry by focusing on identifying the roles of protein post-translational modifications in regulating human physiological and pathological processes. +- [EngrPhys_Franck](https://www.franck.engr.wisc.edu): Jennifer Franck’s group specializes in the development of new experimental techniques at the micro and nano scales with the goal of providing unprecedented full-field 3D access to real-time imaging and deformation measurements in complex soft matter and cellular systems. +- [BMI_Gitter](https://www.biostat.wisc.edu/~gitter/): In Biostatistics and Computer Sciences, Anthony Gitter’s lab conducts computational biology research that brings together machine learning techniques and problems in biology. +- [DairyScience_Dorea](https://andysci.wisc.edu/directory/joao-ricardo-reboucas-dorea/): Joao Dorea’s Animal and Dairy Science group focuses on the development of high-throughput phenotyping technologies. + +Any UW student or researcher who wants to apply high throughput computing resources +to a given problem can harness the capacity of the CHTC Pool. + +[Users can sign up here](https://chtc.cs.wisc.edu/uw-research-computing/get-started.html) diff --git a/2022-12-05-htcondor-week-2023.md b/2022-12-05-htcondor-week-2023.md new file mode 100644 index 00000000..dd76259a --- /dev/null +++ b/2022-12-05-htcondor-week-2023.md @@ -0,0 +1,48 @@ +--- +title: "Save the Date!
HTCondor Week 2023, June 5-8" + +author: Hannah Cheren + +publish_on: + - htcondor + +type: news + +canonical_url: http://htcondor.org/HTCondorWeek2023 + +image: + path: https://raw.githubusercontent.com/CHTC/Articles/main/images/HTCondor_Banner.jpeg + alt: HTCondor Week 2023 + +description: "Save the Date! HTCondor Week 2023, June 5-8" +excerpt: "Save the Date! HTCondor Week 2023, June 5-8" + +card_src: https://raw.githubusercontent.com/CHTC/Articles/main/images/HTCondor_Banner.jpeg +card_alt: HTCondor Week 2023 + +banner_src: https://raw.githubusercontent.com/CHTC/Articles/main/images/HTCondor_Banner.jpeg +banner_alt: HTCondor Week 2023 +--- + +
Save the Date for HTCondor Week June 5 - 8!
+ + +Hello HTCondor Users and Collaborators! + +We want to invite you to HTCondor Week 2023, our annual HTCondor user conference, from June 5-8, 2023 at the Fluno Center at the University of Wisconsin-Madison! + +More information about registration coming soon. + +We will have a variety of in-depth tutorials and talks where you can learn more about HTCondor and how other people are using and deploying HTCondor. Best of all, you can establish contacts and learn best practices from people in industry, government, and academia who are using HTCondor to solve hard problems, many of which may be similar to those you are facing. + +And make sure you check out these articles written on presentations from last year's HTCondor Week! +- [Using high throughput computing to investigate the role of neural oscillations in visual working memory](https://path-cc.io/news/2022-07-06-Fulvio/) +- [Using HTC and HPC Applications to Track the Dispersal of Spruce Budworm Moths](https://path-cc.io/news/2022-07-06-Garcia/) +- [Testing GPU/ML Framework Compatibility](https://path-cc.io/news/2022-07-06-Hiemstra/) +- [Expediting Nuclear Forensics and Security Using High Throughput Computing](https://path-cc.io/news/2022-07-06-Opotowsky/) +- [The Future of Radio Astronomy Using High Throughput Computing](https://path-cc.io/news/2022-07-12-Wilcots/) +- [LIGO's Search for Gravitational Waves Signals Using HTCondor](https://path-cc.io/news/2022-07-21-Messick/) + +Hope to see you there, + +\- The Center for High Throughput Computing diff --git a/2022-12-14-CHTC-Facilitation.md b/2022-12-14-CHTC-Facilitation.md new file mode 100644 index 00000000..ccb12697 --- /dev/null +++ b/2022-12-14-CHTC-Facilitation.md @@ -0,0 +1,71 @@ +--- +title: CHTC Facilitation Innovations for Research Computing + +author: Hannah Cheren + +publish_on: + - chtc + - path + - htcondor + - osg + +type: news + +canonical_url: "https://chtc.cs.wisc.edu/chtc-facilitation.html" + +image: + path:
"https://raw.githubusercontent.com/CHTC/Articles/main/images/Facilitation-cover.jpeg" + alt: Research Computing Facilitator Christina Koch with a researcher. + +description: | + After adding Research Computing Facilitators in 2013-2014, CHTC has expanded its reach to support researchers in all disciplines interested in using large-scale computing to support their research through the shared computing capacity offered by the CHTC. +excerpt: | + After adding Research Computing Facilitators in 2013-2014, CHTC has expanded its reach to support researchers in all disciplines interested in using large-scale computing to support their research through the shared computing capacity offered by the CHTC. + +card_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/Facilitation-cover.jpeg" +card_alt: Research Computing Facilitator Christina Koch with a researcher. +--- + ***After adding Research Computing Facilitators in 2013-2014, CHTC has expanded its reach to support researchers in all disciplines interested in using large-scale computing to support their research through the shared computing capacity offered by the CHTC.*** + + + + As the core research computing center at the University of Wisconsin-Madison and the leading high throughput computing (HTC) force nationally, the Center for High Throughput Computing (CHTC), formed in 2014, has always had one simple goal: to help researchers in all fields use HTC to advance their work. + + Soon after its founding, CHTC learned that computing capacity alone was not enough; there needed to be more communication between researchers who used computing and the computer scientists who wanted to help them. To address this gap, the CHTC needed a new, two-way communication model that better understood and advocated for the needs of researchers and helped them understand how to apply computing to transform their research.
In 2013, CHTC hired its first Research Computing Facilitator (RCF), Lauren Michael, to implement this new model, bringing staff expertise in domain research, research computing, and communication and teaching. Since then, the team has expanded to include additional facilitators, who today include Christina Koch, now leading the team, Rachel Lombardi, and a new team member CHTC is actively hiring. + + +## What is an RCF? + An RCF’s job is to understand a new user's research goals and provide computing options that fit their needs. “As a Research Computing Facilitator, we want to facilitate the researcher’s use of computing,” explains Koch. “They can come to us with problems with their research, and we can advise them on different computing possibilities.” + + Computing facilitators know how to work with researchers and understand research enough to guide the customizations researchers need. More importantly, RCFs are passionate about helping people and solving problems. + + In the early days of CHTC, it was a relatively new idea to hire people with communication and problem-solving skills and apply those talents to computational research. Having facilitators with these skills bridge the gap between research computing organizations and researchers was what was unique to CHTC; in fact, the term “Research Computing Facilitator” was coined at UW-Madison. + +## RCF as a part of the CHTC model + Research computing facilitators have become an integral part of the CHTC and are a unique part of the model for this center. Koch elaborates that “...what’s unique at the CHTC is having a dedicated role – that we’re not just ‘user support’ responding to people’s questions, but we’re taking this more proactive, collaborative stance with researchers.” Research Computing Facilitators strengthen the CHTC and allow a more diverse range of computing dimensions to be supported. This support gives these researchers a competitive edge that others may not necessarily have.
+ + The uniqueness of the RCF role allows for customized solutions for researchers and their projects. They meet with every researcher who [requests an account](https://chtc.cs.wisc.edu/uw-research-computing/form.html) to use [CHTC computing resources](https://chtc.cs.wisc.edu/uw-research-computing/index.html). These individual meetings allow RCFs to have strategic conversations to provide personal recommendations and discuss long-term goals. + + Meetings between the facilitators and researchers also get researchers thinking about what they could do if they could do things faster, at a grander scale, and with less time and effort investment for each project. “We want to understand what their research project is, the goals of that project, and the limitations they’re concerned with to see if using CHTC resources could aid them,” Lombardi explains. “We’re always willing to push the boundaries of our services to try to accommodate researchers' needs.” The RCFs must know enough about the researchers’ work to talk to the researchers about the dimensions of their computing requirements in terms they understand. + + Although RCFs are integral to CHTC’s model, that doesn’t mean the role comes without challenges. One hurdle is that they are facilitators, which means they’re ultimately not the ones to make choices for the researchers they support. They present solutions given each researcher’s unique circumstances, and it’s up to researchers to decide what to do. Koch explains that “it’s about finding the balance between helping them make those decisions while still having them do the actual work, even if it’s sometimes hard, because they understand that it will pay off in the long run.” + + Supporting research computing across domains is also a significant CHTC facilitation accomplishment. Researchers used to need a programming background to apply computing to their analyses, which meant the physical sciences typically dominated large-scale computational analyses.
Over the years, computing has become a lot more accessible. More researchers in the life sciences, social sciences, and humanities have access to community software tools they can apply to their research problems. “It’s not about a user’s level of technical skill or what kind of science they do,” Koch says. It’s about asking, “are you using computing, and do you need help expanding?” CHTC’s ability to pull in researchers across new disciplines has been rewarding and beneficial. “When new disciplines start using computing to tackle their problems, they can do some new, interesting research to contribute to their fields,” Koch notes. + +## Democratizing Access + CHTC’s success can inspire other campuses to rethink their research computing operations to support their researchers better and innovate. Recognized nationally and internationally for its expertise in HTC and facilitation, CHTC has seen its approach start to make its way into other campus computing centers. + + CHTC efforts aim to bring broader access to HTC systems. “CHTC has enabled access to computing to a broad spectrum of researchers on campus,” Lombardi explains, “and we strive to help researchers and organizations implement throughput computing capacity.” CHTC is part of national and international efforts to bring that level of computing to other communities through partnerships with organizations, such as the [Campus Cyberinfrastructure (CC*) NSF program](https://beta.nsf.gov/funding/opportunities/campus-cyberinfrastructure-cc). + + The CC* program supports campuses across the country that wish to contribute computing capacity to the [Open Science Pool (OSPool)](https://osg-htc.org/services/open_science_pool.html). These institutions are awarded a grant, and in turn, they agree to donate resources to the OSPool, a mutually beneficial system that democratizes computing and makes it more accessible to researchers who might not otherwise have access to such capacity.
+ + The RCF team meets weekly with researchers from around the world (including Africa, Europe, and Asia). They hold OSG Office Hours twice a week for one-on-one support and provide training at least twice a month for new users and on special topics. + + Other campuses looking to follow in CHTC’s footsteps can start by implementing facilitation, even before a campus has any computing systems. In some cases, such as on smaller campuses, they might not even have or need to have a computing center. Having facilitators is crucial to providing researchers with individualized support for their projects. + + The next step would be for campuses to examine how they currently support their researchers and whether there is anything they would want to do differently to communicate this ethic of supporting researchers. + + Apart from the impact that research computing facilitators have had on the research community, Koch notes what this job means to her: “[w]orking for a more mission-driven organization where I feel like I’m enabling other people’s research success is so motivating.” Now, almost ten years later, the CHTC has gone from having roughly one hundred research groups using the capacity it provides to having several hundred research groups and thousands of users per year.
“Facilitation will continue to advise and support these projects to advance the big picture,” Lombardi notes. “We’ll always be available to researchers who want to talk to someone about how CHTC resources can advance their work!” diff --git a/2022-12-19-Lightning-Talks.md b/2022-12-19-Lightning-Talks.md new file mode 100644 index 00000000..7176f3a7 --- /dev/null +++ b/2022-12-19-Lightning-Talks.md @@ -0,0 +1,172 @@ +--- +title: "Student Lightning Talks from the OSG User School 2022" + +author: Hannah Cheren + +publish_on: + - osg + - path + - chtc + - htcondor + +type: news + +canonical_url: https://osg-htc.org/spotlights/Lightning-Talks.html + +image: + path: "https://raw.githubusercontent.com/CHTC/Articles/main/images/Lightning-Talks-card.jpeg" + alt: Staff and attendees from the OSG User School 2022. + +description: The OSG User School student lightning talks showcased their research, inspiring all the event participants. +excerpt: The OSG User School student lightning talks showcased their research, inspiring all the event participants. + +card_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/Lightning-Talks-card.jpeg" +card_alt: Staff and attendees from the OSG User School 2022. + +banner_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/Lightning-Talks-card.jpeg" +banner_alt: Staff and attendees from the OSG User School 2022. +--- + ***The OSG User School student lightning talks showcased their research, inspiring all the event participants.*** + + + + Each summer, the OSG Consortium offers a [week-long summer school](https://osg-htc.org/user-school-2022/) for researchers who want to learn how to use [high-throughput computing](https://htcondor.org/htc.html) (HTC) methods and services to handle large-scale computing applications at the heart of today’s cutting-edge science. This past summer the school was back in person on the University of Wisconsin–Madison campus, attended by 57 students and over a dozen staff.
+ + Participants ranging from Mali and Uganda to campuses across the United States learned through lectures, discussions, and hands-on activities how to apply HTC approaches to handle large ensembles of jobs and large datasets in support of their research work. +“It's truly humbling to see how much cool work is being done with computing on @CHTC_UW and @opensciencegrid!!” research facilitator Christina Koch tweeted regarding the School. + + One highlight of the School is the closing participants’ lightning talks, where the researchers present their work and plans to integrate HTC, expanding the scope and goals of their research. +The lightning talks given at this year’s OSG User School illustrate the diversity of students’ research and its expanding scope enabled by the power of HTC and the School. + + *Note: Applications to attend the School typically open in March. Check the [OSG website](https://osg-htc.org/) for this announcement.* + + + + [Devin Bayly](https://sxsw.arizona.edu/person/devin-bayly), a data and visualization consultant at the University of Arizona's Research Technologies department, presented “*OSG for Vulkan StarForge Renders.*” Devin has been working on a multimedia project called Stellarscape, which combines astronomy data with the fine arts. The project aims to pair the human’s journey with a star’s journey from birth to death. + + His goal has been to find a way to support connections with the fine arts, a rarity in the HTC community. After attending the User School, Devin intends to use the techniques he learned to break up his data and entire simulation into tiles and use a low-level graphics API called Vulkan to target and render the data on CPU/GPU capacity. He then intends to combine the tiles into individual frames and assemble them into a video. + + + + Starforge Anvil of Creation: *Grudić, Michael Y., et al.
“STARFORGE: Toward a comprehensive numerical model of star cluster formation and feedback.” arXiv: Instrumentation and Methods for Astrophysics (2020): n. pag. [https://arxiv.org/abs/2010.11254](https://arxiv.org/abs/2010.11254)* + + + + [Mike Nsubuga](https://miken.netlify.app/), a Bioinformatics Research fellow at the African Center of Excellence in Bioinformatics and Data-Intensive Sciences ([ACE](https://ace.ac.ug/)) within the Infectious Disease Institute ([IDI](https://idi.mak.ac.ug/)) at Makerere University in Uganda, presented “*End-to-End AI data systems for targeted surveillance and management of COVID-19 and future pandemics affecting Uganda.*” + + Nsubuga noted that in the United States, there are two physicians for every 1000 people; in Uganda, there is only one physician per 25,000 people. Research shows that AI, automation, and data science can support overburdened health systems and health workers when deployed responsibly. +Nsubuga and a team of researchers at ACE are working on creating AI chatbots for automated and personalized symptom assessments in English and Luganda, one of the major languages of Uganda. He's training the AI models using data from the public and healthcare workers to communicate with COVID-19 patients and the general public. + + While at the School, Nsubuga learned how to containerize his data into a Docker image, and from that, he built an Apptainer (formerly Singularity) container image. He then deployed this to the [Open Science Pool](https://osg-htc.org/services/open_science_pool.html) (OSPool) to determine how to mimic the traditional conversation assistant workflow model in the context of COVID-19. The capacity offered by the OSPool reduced the time it takes to train the AI model by a factor of eight. + + + + Jem Guhit, a Physics Ph.D.
candidate from the University of Michigan, presented “*Search for Di-Higgs production in the LHC with the ATLAS Experiment in the bbtautau Final State.*” The Higgs boson was discovered in 2012 and is known for the Electroweak Symmetry Breaking (EWSB) phenomenon, which explains how other particles get mass. Since then, the focus of the LHC has been to investigate the properties of the Higgs boson, and one can get more insight into how the EWSB mechanism works by searching for two Higgs bosons using the ATLAS Detector. The particle detectors capture the resultant particles from proton-proton collisions and use this as data to look for two Higgs bosons. + + DiHiggs searches pose a challenge because the rate at which a particle process occurs for two Higgs bosons is 30x smaller than for a single Higgs boson. Furthermore, the particles the Higgs can decay to have similar particle trajectories to other particles produced in the collisions unrelated to the Higgs boson. Her strategy is to use a machine learning (ML) method powerful enough to handle complex patterns to determine whether the decay products come from a Higgs boson. She plans to use what she’s learned at the User School to show improvements in her machine-learning techniques and optimizations. With these new skills, she has been running jobs on the University of Michigan's [HTCondor](https://htcondor.com/) system utilizing GPUs and CPUs to run ML jobs efficiently and plans to use the [OSPool](https://osg-htc.org/services/open_science_pool.html) computing cluster to run complex jobs. + + + + [Peder Engelstad](https://www.nrel.colostate.edu/ra-highlights-meet-peder-engelstad/), a spatial ecologist and research associate in the Natural Resource Ecology Laboratory at Colorado State University (and 2006 University of Wisconsin-Madison alumnus), presented a talk on “*Spatial Ecology & Invasive Species.*” Engelstad’s work focuses on the ecological importance of natural spatial patterns of invasive species.
+ + He uses modeling and mapping techniques to explore the spatial distribution of suitable habitats for invasive species. The models he uses combine locations of species with remotely-sensed data, using ML and spatial libraries in R. Recently, he’s taken on the massive task of creating thousands of suitability maps. To do this sequentially would take over three years, but he anticipates HTC methods can help drastically reduce this timeframe to a matter of days. + + Engelstad said it’s been exciting to see the approaches he can use to tackle this problem using what he’s learned about HTC, including determining how to structure his data and break it into smaller chunks. He notes that the nice thing about using geospatial data is that they are often in a 2-D grid system, making it easy to index them spatially and designate georeferenced tiles to work on. Engelstad says that an additional benefit of incorporating HTC methods will be to free up time to work on other scientific questions. + + + + [Zachary Baldwin](https://zabaldwin.github.io/), a Ph.D. candidate in Nuclear and Particle Physics at Carnegie Mellon University, works for the [GlueX Collaboration](http://www.gluex.org/), a particle physics experiment at the Thomas Jefferson National Lab that searches for and studies exotic hybrid mesons. Baldwin presented a talk on “*Analyzing hadronic systems in the search for exotic hybrid mesons at GlueX.*” + + His thesis looks at data collected from the GlueX experiment to possibly discover forbidden quantum numbers found within subatomic particle systems to determine if they exist within our universe. Baldwin's experiment takes a beam of electrons, speeds them up to high energies, and then collides them with a thin diamond wafer. These electrons then slow down, producing linearly polarized photons. These photons will then collide with a container of liquid hydrogen (protons) within the center of his experiment.
Baldwin studies the resulting systems produced within these photon-proton collisions. + + The collision creates billions of particles, leaving Baldwin with many petabytes of data. Baldwin remarks that too much time gets wasted looping through all the data points, and massive processes run out of memory before he can compute results, which is one aspect where HTC comes into play. Through the User School, another major area he's been working on is simulating Monte Carlo particle reactions in containers, which he pushes into the [OSPool](https://osg-htc.org/services/open_science_pool.html) using HTCondor to simulate events that he believes would happen in the real world. + + + + Olaitan Awe, a systems analyst in the Information Technology department at the Jackson Laboratory (JAX), presented “*Newborn Screening (NBS) of Inborn Errors of Metabolism (IEM).*” The goal of newborn screening is to detect, soon after a baby is born, diseases the child might have. + + Genomic Newborn Screenings (gNBS) are generally cheap, detect many diseases, and have a quick turnaround time. The gNBS takes a child’s genome and compares it to a reference genome to check for variations. The computing challenge lies in looking for all variations, determining which are pathogenic, and seeing which diseases they align with. + + After attending the User School, Awe intends to tackle this problem by writing [DAGMan](https://htcondor.org/dagman/dagman.html) scripts to implement parent-child relations in a pipeline he created. He then plans to build custom containers to run the pipeline on the [OSPool](https://osg-htc.org/services/open_science_pool.html) and stage big data shared across parent-child processes. The long-term goal is to develop a validated, reproducible gNBS pipeline for routine clinical practice and apply it to African populations. + + + + [Max Bareiss](https://safetyimpact.beam.vt.edu/news/2021Abstracts/BareissAAAM20211.html), a Ph.D.
candidate at the Virginia Tech Center for Injury Biomechanics presented “*Detection of Camera Movement in Virginia Traffic Camera Video on OSG.*” Bareiss used a data set of 1263 traffic cameras in Virginia for his project. His goal was to determine how to document crashes, near-crashes, and normal driving recorded by traffic cameras using his video analysis pipeline. This work would ultimately allow him to detect vehicles and pedestrians and determine their trajectories. + + The three areas he wanted to tackle and obtain help with at the User School were data movement, code movement, and using GPUs for other tasks. For data movement, he used MinIO, a high-performance object store, so that the execution points could directly copy the videos from Virginia Tech. For code movement, Bareiss used Alpine Linux and multi-stage builds, which he learned to implement throughout the week. He learned about using GPUs at the [Center for High Throughput Computing](https://chtc.cs.wisc.edu/) (CHTC) and in the [OSPool](https://osg-htc.org/services/open_science_pool.html). + + Additionally, he learned about [DAGMan](https://htcondor.org/dagman/dagman.html), which he noted was “very exciting” since his pipeline was already a directed acyclic graph (DAG). + + + + [Matthew Dorsey](https://www.linkedin.com/in/matthewadorsey/), a Ph.D. candidate in the Chemical and Biomolecular Engineering Department at North Carolina State University, presented on “*Computational Studies of the Structural Properties of Dipolar Square Colloids.*” + + Dorsey is studying a colloidal particle developed in a research lab at NC State University in the Biomolecular Engineering Department. His research focuses on using computer models to discover what these particles can do. The computer models he has developed explore how different parameters (like the system’s temperature, particle density, and the strength of an applied external field) affect the particle’s self-assembly.
+ + Dorsey recently discovered how the magnetic dipoles embedded in the squares lead to structures with different material properties. He intends to use the [HTCondor Software Suite](https://htcondor.com/htcondor/overview/) (HTCSS) to investigate applied external fields that change with respect to time. “The HTCondor system allows me to rapidly investigate how different combinations of many different parameters affect the colloids' self-assembly,” Dorsey says. + + + + [Ananya Bandopadhyay](https://thecollege.syr.edu/people/graduate-students/ananya-bandopadhyay/), a graduate student from the Physics Department at Syracuse University, presented “*Using HTCondor to Study Gravitational Waves from Binary Neutron Star Mergers.*” + + Gravitational waves are created when black holes or neutron stars crash into each other. Analyzing these waves helps us learn about the objects that created them and their properties. + + Bandopadhyay's project focuses on [LIGO](https://www.ligo.caltech.edu/)'s ability to detect gravitational wave signals coming from binary neutron star mergers involving sub-solar mass component stars, which she determines from a graph showing the detectability of the signals as a function of the component masses comprising the binary system. + + The fitting factors for the signals would have initially taken her laptop a little less than a year to compute. She learned how to use [OSPool](https://osg-htc.org/services/open_science_pool.html) capacity at the School, where her jobs take only 2-3 days to run. Other lessons that Bandopadhyay hopes to apply are data organization and management as she scales up the number of jobs. Additionally, she intends to implement [containers](https://htcondor.readthedocs.io/en/latest/users-manual/container-universe-jobs.html) to help collaborate with and build upon the work of researchers in related areas. + + + + [Meng Luo](https://www.researchgate.net/profile/Meng-Luo-8), a Ph.D.
student from the Department of Forest and Wildlife Ecology at the University of Wisconsin–Madison, presented “*Harnessing OSG to project the impact of future forest productivity change on land use change.*” Luo is interested in learning how forest productivity increases or decreases over time. + + Luo built a single forest productivity model using three sets of remote sensing data to predict this productivity, coupling it with a global change analysis model to project possible futures. + + Completing this work on her own computer would have taken two years. During the User School, Luo learned she could use [Apptainer](https://portal.osg-htc.org/documentation/htc_workloads/using_software/containers-singularity/) to run her model and multiple events simultaneously. She also learned to use the [DAGMan workflow](https://htcondor.readthedocs.io/en/latest/users-manual/dagman-workflows.html) to organize the process better. With all this knowledge, she ran a scenario that used to take a week to complete in only a couple of hours with the help of [OSPool](https://osg-htc.org/services/open_science_pool.html) capacity. + + Tinghua Chen from Wichita State University presented a talk on “*Applying HTC to Higgs Boson Production Simulations.*” Ten years ago, the [ATLAS](https://atlas.cern/) and [CMS](https://cms.cern/) experiments at [CERN](https://home.web.cern.ch/) announced the discovery of the Higgs boson. CERN is a research center that operates the world's largest particle physics laboratory. The ATLAS and CMS experiments are general-purpose detectors at the Large Hadron Collider (LHC) that both study the Higgs boson. + + For his work, Chen uses a Monte Carlo event generator, Herwig 7, to simulate the production of the Higgs boson in vector boson fusion (VBF). He uses the event generator to predict hadronic cross sections, which could be useful for experimentalists studying the Standard Model Higgs boson. 
Based on the central limit theorem, the more events Chen can generate, the more accurate the prediction. + + Chen can run ten thousand events on his laptop, but the resulting predictions are not precise enough. Ideally, he'd like to run five billion events for more precision. Running all these events would be impossible on his laptop; his solution is to run the event generators using the HTC services provided by the OSG consortium. + + Using a workflow he built, he can set up Herwig 7 to build, integrate, and run the events, with the integration steps and event generation running in parallel. + +... + +Thank you to all the researchers who presented their work in the Student Lightning Talks portion of the OSG User School 2022! diff --git a/2022-12-19-ML-Demo.md b/2022-12-19-ML-Demo.md new file mode 100644 index 00000000..5a3f04ee --- /dev/null +++ b/2022-12-19-ML-Demo.md @@ -0,0 +1,107 @@ +--- +title: "CHTC Hosts Machine Learning Demo and Q+A session" + +author: Shirley Obih + +publish_on: + - chtc + +type: user + +canonical_url: https://chtc.cs.wisc.edu/mldemo.html + +image: + path: "https://raw.githubusercontent.com/CHTC/Articles/main/images/firstmldemoimage.png" + alt: A broad lens image of some students present at the demo. + +description: Over 60 students and researchers attended the Center for High Throughput Computing (CHTC) machine learning and GPU demonstration on November 16th. +excerpt: Over 60 students and researchers attended the Center for High Throughput Computing (CHTC) machine learning and GPU demonstration on November 16th. 
+ +card_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/firstmldemoimage.png" +card_alt: Koch and Gitter presenting at the demo + +banner_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/ML_1.jpeg" +banner_alt: Koch and Gitter presenting at the demo +--- +***Over 60 students and researchers attended the Center for High Throughput Computing (CHTC) machine learning and GPU demonstration on November 16th. UW-Madison Associate Professor of Biostatistics and Medical Informatics Anthony Gitter and CHTC Lead Research Computing Facilitator Christina Koch led the demonstration and fielded many questions from the engaged audience.*** + + + +[CHTC services](https://chtc.cs.wisc.edu/uw-research-computing/) include a free, large-scale computing solution for campus researchers who have encountered computing issues and outgrown their resources, often a laptop, Koch began. One of the services CHTC provides is the [GPU Lab](https://chtc.cs.wisc.edu/uw-research-computing/gpu-lab.html), a resource within the HTC system of CHTC. + +The GPU Lab supports up to dozens of concurrent jobs per user, a variety of GPU types including 40GB and 80GB A100s, runtimes from a few hours up to seven days, significant RAM needs, and space for large data sets. + +Researchers are not waiting to take advantage of these CHTC GPU resources. Over the past two months, 52 researchers ran over 17,000 jobs on GPU hardware. Additionally, the UW-Madison [IceCube project](https://icecube.wisc.edu) alone ran over 70,000 jobs. + +Even more capacity is available. The recent [$4.3 million investment from the Wisconsin Alumni Research Foundation (WARF) in UW-Madison’s research computing hardware](https://chtc.cs.wisc.edu/DoIt-Article-Summary.html) is a significant contributor to this abundance of resources, Gitter noted. 
+ +There are two main ways to know what GPUs are available and the number of GPUs users may request per job: +The first is the CHTC website, which offers up-to-date information. To access this information, go to the [CHTC website](https://chtc.cs.wisc.edu) and enter ‘gpu’ in the search bar. The first result will be the [‘Jobs that Use GPU Overview’](https://chtc.cs.wisc.edu/uw-research-computing/gpu-jobs.html), which is the main guide on using GPUs in CHTC. At the very top of this guide is a table that contains information about the kinds of GPUs, the number of servers, and the number of GPUs per server, which limits how many GPUs can be requested per job. Also listed is the GPU memory, which shows the amount of GPU memory and the attribute you would use in the ‘require_gpus’ statement when submitting a job. + +A second way is to use the `condor_status` command. To use this command, make sure to set a constraint of ‘Gpus > 0’ to prevent printing out information on every single server in the system: `condor_status -constraint 'Gpus > 0'`. This gives the names of servers in the pool and their availability status, idle or busy. Users may also add the autoformat flag `-af` to print out any desired attribute of the machine. For instance, to access attributes like those listed in the table of the CHTC guide, users must include the GPUs prefix followed by an underscore and then the name of the column to access. + +The GPU Lab, due to its expansive potential, can be used in many scenarios. Koch explained this using real-world examples. Researchers might want to seek out the CHTC GPU Lab when: +- Running into the time limit of an existing GPU while trying to develop and run a machine learning algorithm. +- Working with models that require more memory than what is available with the GPU currently in use. 
+- Trying to benchmark the performance of a new machine learning algorithm and realizing that the computing resources available are time-consuming and not equipped for multitasking. + +While GPU Lab users routinely submit many jobs that need a single GPU without issue, users may need to work collaboratively with the CHTC team on extra testing and configuration when handling larger data sets and models or benchmarking precise timings. Koch presented a slide outlining what is easy versus more challenging on CHTC GPU resources, stressing that, when in doubt about what is feasible, to contact CHTC: + + + +Work that is done in CHTC is run through a job submission. Koch presented a flowchart demonstrating how this works: + + + + +She demonstrated the three-step process of +1. login and file upload, +2. submission to queue, and +3. job-run execution by the HTCondor job scheduler. +This process, she explained, involves writing a submit file and using command-line syntax to submit it to the queue. Below are some commands that can be used to submit a file: + + + +The next part of the demo was led by Gitter. To demonstrate what commands would be needed for specific kinds of job submissions, he explained what a job submit file should look like, some necessary commands, and the importance of listing out commands sequentially. + + +Gitter also demonstrated how to run jobs using the example GitHub repository with the following steps: +- Connecting a personal user account to a submit server in CHTC +- Using the `ls` command to inspect the home directory +- Cloning the pre-existing template repository with runnable GPU examples +- Running a `condor_submit *insert-file-name*.sub` command to define the job the user wants to run +- Using the `condor_q` command to monitor the job that has been submitted + +Users are able to choose GPU-related submit file options. 
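The command-line steps described above can be sketched as follows. This is a hedged sketch, not a transcript of the demo: the submit file name `example.sub` is hypothetical, the `GPUs_GlobalMemoryMb` attribute follows the "GPUs prefix plus column name" convention described earlier, and all commands assume access to an HTCondor submit server such as CHTC's:

```
# Query only machines advertising GPUs, rather than every server in the pool
condor_status -constraint 'Gpus > 0'

# Print a specific machine attribute using the autoformat flag
condor_status -constraint 'Gpus > 0' -af GPUs_GlobalMemoryMb

# Submit the job described in a submit file to the queue
condor_submit example.sub

# Monitor the submitted job while HTCondor schedules and runs it
condor_q
```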
Gitter demonstrated ways to access the different options that are needed in the HTCondor submit file in order to access the GPUs in the CHTC GPU Lab and beyond. These include: +- `request_gpus` to enable GPU use +- `+WantGPULab` to indicate whether to use CHTC’s shared-use GPUs +- `+GPUJobLength` to indicate which job type the user would like to submit +- `require_gpus` to request specific GPU attributes or CUDA functionality + +He outlined some other commands for running PyTorch jobs and for exploring available GPUs. All commands from the demo can be accessed [here](https://docs.google.com/presentation/d/1pdE3oT539iOjxuIRvGeUjQ_GcaiD00r4iCOdp65PPME/edit#slide=id.p). + +The event concluded with a Q&A session for audience members. Some of these questions prompted a discussion on the availability of default repositories and tools that are able to track the resources a job is using. In addition to interactive monitoring, HTCondor has a log file that provides information about when a job was started and a summary of what was requested (disk, memory, GPUs, and CPUs) as well as what was allocated and estimated to be used. + +Currently, there is a template GitHub repository with PyTorch and TensorFlow examples that can be cloned and used as a starting point. However, nearly every user is using a slightly different combination of packages for their work. For this reason, users will most likely need to make some manual modifications, such as adjusting versions, changing scripts, or renaming data files. 
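Put together, the GPU options above form the GPU-specific portion of a submit file. The following is a minimal sketch under stated assumptions, not a file from the demo: the executable name, job length, log file names, and memory threshold are hypothetical illustrative values:

```
# example.sub -- hypothetical GPU job submit file
executable = run_model.sh
log        = job.log
output     = job.out
error      = job.err

# GPU Lab options covered in the demo
request_gpus  = 1
+WantGPULab   = true
+GPUJobLength = "short"
require_gpus  = (GlobalMemoryMb >= 40000)

# standard resource requests
request_cpus   = 1
request_memory = 8GB
request_disk   = 20GB

queue
```

Submitting this file with `condor_submit` and monitoring it with `condor_q`, as shown in the demo, would queue the job on GPU Lab capacity.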
+ +These resources will be helpful when getting started: +- [Request an account with CHTC](https://chtc.cs.wisc.edu/uw-research-computing/form.html) +- [Access the event slides (including demo commands)](https://docs.google.com/presentation/d/1pdE3oT539iOjxuIRvGeUjQ_GcaiD00r4iCOdp65PPME/edit#slide=id.p) +- [Access a guide to assist with all your computing needs](https://chtc.cs.wisc.edu/uw-research-computing/guides) +- [Access GPU templates](https://github.com/CHTC/templates-GPUs) +- [Contact CHTC](https://chtc.cs.wisc.edu/uw-research-computing/get-help.html) for assistance diff --git a/2023-01-20-chtc-demo.md b/2023-01-20-chtc-demo.md new file mode 100644 index 00000000..03c6b639 --- /dev/null +++ b/2023-01-20-chtc-demo.md @@ -0,0 +1,65 @@ +--- +title: CHTC Leads High-Throughput Computing Demonstrations + +author: Shirley Obih + +publish_on: + - htcondor + - path + - chtc + +type: news + +canonical_url: https://chtc.cs.wisc.edu/chtc-demo.html + +image: + path: "https://raw.githubusercontent.com/CHTC/Articles/main/images/classroomimage.jpeg" + alt: Christina Koch presenting to Kaiping Chen's class. + +description: Students and researchers acquire high-throughput computing know-how from CHTC-led demonstrations. +excerpt: Students and researchers acquire high-throughput computing know-how from CHTC-led demonstrations. + +card_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/classroomimage.jpeg" +card_alt: Christina Koch presenting to Kaiping Chen's class + +banner_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/classimage.png" +banner_alt: Christina Koch presenting to Kaiping Chen's class + +--- + +***UW-Madison Assistant Professor [Kaiping Chen](https://lsc.wisc.edu/facstaff/chen-kaiping/) is taking her [life sciences](https://lsc.wisc.edu) course (LSC660) to the next level by incorporating high throughput computing (HTC) into her class. 
The course, Data Science for Understanding Science Communication, involves learning to use statistical methods (e.g., chi-square, analysis of variance, correlation and regression analysis, nonparametric tests) and computational methods (e.g., automated text analysis, computer vision) – all of which sometimes require complex, time-consuming computing that surpasses the capacity of the everyday computer.*** + + + +To meet this computing challenge, Chen enlisted the help of [CHTC](https://chtc.cs.wisc.edu) Lead Research Computing Facilitator Christina Koch in November 2022 for a demonstration for her class. Chen wanted students to: +- Acquire knowledge about the basic approaches for large scale computing +- Understand the different scenarios regarding why they may need to use high throughput computing in research +- Be able to distinguish between independent and sequential tasks +- Be able to submit script jobs onto the CHTC campus computing cluster +- Obtain a basic understanding of the parallel computing implementation in R + +Koch achieved these goals by presenting the uses of HTC for large scale computing and leading a hands-on demonstration with Chen to teach students how to submit and run R programming scripts to perform topic modeling on social media data using HTC. + +This learning, Chen noted, served as a tool to help students convert theoretical, class-based knowledge into more practical abilities, including learning how to approach computational tasks that could be useful in future work. Two examples of such complex computational tasks are structural topic models (STMs) and regression models. STM uses unsupervised machine learning to identify keywords and major themes across a large corpus that can be interpreted into human-readable formats for data analysis. STM is also useful for comparing social media influencer versus non-influencer perspectives on science issues. 
+ +The majority of the students in the class, while new to CHTC resources, found the class to be a good introduction to HTC. Ph.D. student Ashley Cate from [LSC](https://lsc.wisc.edu) was a prime example: +“I am still an extreme novice when it comes to understanding all the options CHTC has to offer. However, one thing that Christina Koch made very clear is that you’re not alone in your endeavor of utilizing HTC to meet your research needs, and I feel very confident that the professionals would be able to work me through how CHTC could help me.” Master’s student of Life Sciences Communication Jocelyn Cao reported, “I do think I will be utilizing CHTC in my future work because I am interested in doing work with social media.” + + +Other campus groups have also reached out to Koch to learn about CHTC services for their research. Lindley's research group, a group of undergraduate students, M.S. and Ph.D. candidates, and postdocs involved in nuclear reactor physics, advanced reactor design, and integrated energy systems, wanted to understand how to harness the power of HPC/HTC in their research. + +[Ben Lindley](https://directory.engr.wisc.edu/ep/Faculty/Lindley_Benjamin/), UW-Madison Engineering Physics assistant professor, has utilized CHTC in his previous work to build software. With the assistance of postdoc Una Baker, Lindley sought the help of CHTC. “One of the beauties of the high throughput computing resources is that we can analyze dozens or hundreds of cases in parallel,” Lindley said. These cases represent scenarios where certain design features of nuclear reactors are modified and observed for change. “Without HTC, the scope of research could be very limited. Computers could crash and tasks could take too long to complete.” + + + + + +In-person demonstrations with classrooms and research groups are always available at CHTC to UW-Madison researchers looking to expand computing beyond local resources. 
Koch noted that “we are always happy to meet with course instructors who are interested in including large scale computing in their courses, to share different ways we can support our goals.” + +Contact CHTC [here](https://chtc.cs.wisc.edu/uw-research-computing/get-help.html). diff --git a/2023-01-20-materials-science.md b/2023-01-20-materials-science.md new file mode 100644 index 00000000..2b2628b1 --- /dev/null +++ b/2023-01-20-materials-science.md @@ -0,0 +1,48 @@ +--- +title: Empowering Computational Materials Science Research using HTC + +author: Hannah Cheren + +publish_on: +  - chtc +  - path +  - htcondor + +type: user + +canonical_url: "https://chtc.cs.wisc.edu/materials-science.html" + +image: +  path: "https://raw.githubusercontent.com/CHTC/Articles/main/images/materials-science.jpg" +  alt: Computer screen with lines of code. Uploaded by AltumCode on Unsplash. + +description: | +  Ajay Annamareddy, a research scientist at the University of Wisconsin-Madison, describes how he utilizes high-throughput computing in computational materials science. +excerpt: | +  Ajay Annamareddy, a research scientist at the University of Wisconsin-Madison, describes how he utilizes high-throughput computing in computational materials science. + +card_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/materials-science.jpg" +card_alt: Computer screen with lines of code. Uploaded by AltumCode on Unsplash. +--- + ***Ajay Annamareddy, a research scientist at the University of Wisconsin-Madison, describes how he utilizes high-throughput computing in computational materials science.*** + + + + Groundbreaking research is in the works for the [Computational Materials Group (CMG)](https://matmodel.engr.wisc.edu/) at the University of Wisconsin-Madison (UW-Madison). [Ajay Annamareddy](https://matmodel.engr.wisc.edu/members/), a research scientist within CMG, has been a leading user of GPU hours with the [Center for High Throughput Computing](https://chtc.cs.wisc.edu/) (CHTC). 
He utilizes this capacity to run machine learning (ML) simulations applied to materials science problems, an area that has gained tremendous interest in the past decade. CHTC resources have allowed him to study heavily data-driven problems that are practically impossible to tackle using regular resources. + + Before coming to UW-Madison, Annamareddy received his Ph.D. in Nuclear Engineering from North Carolina State University. He was introduced to modeling and simulation work there, but he started using high-throughput computing (HTC) and CHTC services when he came to UW-Madison to work as a postdoc with [Prof. Dane Morgan in the Materials Science and Engineering department](https://energy.wisc.edu/about/energy-experts/dane-morgan). He now works for CMG as a research scientist, where he’s been racking up GPU hours for over a year. + + Working in the field of computational materials, Annamareddy and his group use computers to determine the properties of materials. Rather than preparing a material and measuring it in experiments, they use a computer, which is less expensive and more time-efficient. Annamareddy studies metallic glasses. These materials have many valuable properties and applications, but are not easy to make. Instead, he uses computer simulations of these materials to analyze and understand their fundamental properties. + + Annamareddy’s group utilizes HTC and high-performance computing (HPC) for their work, so his project lead asked him to contact CHTC and set up an account. Christina Koch, the lead research computing facilitator, responded. “She helped me set up the account and determine how many resources we needed,” Annamareddy explained. “She was very generous in that whenever I exceeded my limits, she would increase them a bit more!” + + CHTC resources have become critical for Annamareddy’s work. One of his projects involves running ML simulations, which he notes would be “difficult to complete” without the support of CHTC. 
Annamareddy uses graph neural networks (GNNs), a powerful yet slightly inefficient deep learning technique. The upside of using GNNs is that as long as there is some physics component in the underlying research problem, the technique can analyze just about anything. “The caveat is you need to provide lots of data for this technique to figure out a solution.” + + To meet this data challenge, Annamareddy puts the input data he generates using high-performance computing (HPC) in the HTC staging location, from which it gets transferred to a local machine before the ML job starts running. “I use close to twenty gigabytes of data for my simulation, so this would be extremely inefficient to run without staging,” he explains. CHTC provides Annamareddy with the storage and organization he needs to perform these potentially ground-breaking ML simulations. + + Researchers often study materials in traditional atomistic simulations at different timescales, ranging from picoseconds to microseconds. Annamareddy’s goal is to extend the time scales of these conventional simulations by using ML, which he found is well supported by HTC resources. “We have yet to reach it, but we hope we can use ML to extend the time scale of atomistic simulations by a few orders of magnitude. This would be extremely valuable when modeling systems like glass-forming materials, where we should be able to obtain properties, like density and diffusion coefficients, much closer to experiments than currently possible with atomistic simulations,” Annamareddy elaborates. This is something that has never been done before in the field. + + This project can potentially extend the time scales possible for conventional molecular dynamics simulations, allowing researchers in this field to predict how materials will behave over more extended periods of time. 
“It's ambitious – but I’ve been working on it for more than a year, and we’ve made a lot of progress…I enjoy the challenge immensely and am happy I’m working on this problem!” diff --git a/2023-03-01-Google-HTCondor.md b/2023-03-01-Google-HTCondor.md new file mode 100644 index 00000000..b6aea2ec --- /dev/null +++ b/2023-03-01-Google-HTCondor.md @@ -0,0 +1,55 @@ +--- +title: HTCondor and Google Quantum Computing + +author: Hannah Cheren + +publish_on: +- chtc +- path +- htcondor + +type: user + +canonical_url: "https://chtc.cs.wisc.edu/htcondor-google-qvm.html" + +image: + path: "https://raw.githubusercontent.com/CHTC/Articles/main/images/google-qvm.jpg" + alt: Quantum AI Logo. Image from Quantum AI Product Manager Catherine Vollgraff Heidweiller’s research blog post. + +description: | + Google's launch of a Quantum Virtual Machine emulates the experience and results of programming one of Google's quantum computers, managed by an HTCondor system running in Google Cloud. +excerpt: | + Google's launch of a Quantum Virtual Machine emulates the experience and results of programming one of Google's quantum computers, managed by an HTCondor system running in Google Cloud. + +card_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/google-qvm.jpg" +card_alt: Quantum AI Logo. Image from Quantum AI Product Manager Catherine Vollgraff Heidweiller’s research blog post. + +banner_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/google-qvm.jpg" +banner_alt: Quantum AI Logo. Image from Quantum AI Product Manager Catherine Vollgraff Heidweiller’s research blog post. 
+--- + ***Google's launch of a Quantum Virtual Machine emulates the experience and results of programming one of Google's quantum computers, managed by an HTCondor system running in Google Cloud.*** + + + + The CEO of Google and Alphabet, Sundar Pichai, tweeted out some thrilling news: + + “Excited to launch a Quantum Virtual Machine (QVM) which emulates the experience and results of programming one of our quantum computers. It will make it easier for researchers to prototype new algorithms and help students learn how to program a quantum computer.” – [Tweet](https://twitter.com/sundarpichai/status/1549448858282774528). + + Today’s “classical” computing systems, from laptops to large supercomputers, are built using circuit behavior defined by classical physics. Quantum computer circuitry, still in the early phases of development, harnesses the laws of quantum mechanics to solve computing problems in new ways. Quantum computers offer exponential speedups – over 100 million times faster for specific problems – to produce groundbreaking results. However, quantum computing will require scientists and engineers to revisit many classical algorithms and develop new ones tailored to exploit the benefits of quantum processors. Therefore, the QVM is a helpful tool for quantum algorithms research. + + “The QVM is, in essence, a realistic simulation of a grid on our quantum hardware using classical computers,” Tom Downes, a consultant for High-Performance Computing (HPC) at Google Cloud, explains. Simulating a grid of qubits, the basic unit of quantum information, on a quantum processor requires many trajectory simulations of quantum noise. Downes explains, “quantum computers are noisy, so it is important to test and adjust your quantum circuits in realistic conditions so they can perform well and output the data you are looking for in your research problem. 
To virtualize a processor, the QVM uses the noise data and topology of Google's real hardware.” The grid size determines whether a researcher can use their laptop or needs a setup utilizing many classical computers to power the simulation. Essentially, research on the QVM is "proof of concept" research. + + To enable researchers to test their algorithms on a larger grid of qubits, Google utilized the [HTCondor Software Suite](https://htcondor.org) (HTCSS) to organize the capacity of many classical computers to run multiple simulations of a quantum circuit simultaneously. The HTCondor Software Suite enables researchers to easily harness the collective computing power of many classical computers and to submit and manage large numbers of computing jobs. Today, HTCSS is used at universities, government labs, and commercial organizations worldwide, including within Google’s own Google Cloud Platform to power the QVM. Downes details, “this ability to test on a 32-qubit grid makes extrapolating its performance to a non-simulatable grid more feasible.” + + The new [Google Quantum AI tutorial](https://quantumai.google/qsim/tutorials/multinode) shows users how to use the Cloud HPC Toolkit, which makes it easy for new users to deploy HTCondor pools in Google Cloud. 
Downes explains that the tutorial “provides the basic elements of an HTCondor pool: a central manager, an access point, and a pool of execute points that scale in size to work through the job queue.” + + The tutorial by Google describes how to: +- Use Terraform to deploy an HTCondor cluster in Google Cloud +- Run a multi-node quantum computing simulation using HTCondor +- Query cluster information and monitor running jobs in HTCondor +- Use Terraform to destroy the cluster + + Please visit [this website](https://blog.google/technology/research/our-new-quantum-virtual-machine-will-accelerate-research-and-help-people-learn-quantum-computing/) for more information about the Quantum Virtual Machine and [how researchers can use HTCondor for multinode quantum simulations](https://quantumai.google/qsim/tutorials/multinode). diff --git a/2023-04-10-ospool-computation.md b/2023-04-10-ospool-computation.md new file mode 100644 index 00000000..10fe848b --- /dev/null +++ b/2023-04-10-ospool-computation.md @@ -0,0 +1,60 @@ +--- +title: OSPool As a Tool for Advancing Research in Computational Chemistry + +author: Shirley Obih + +publish_on: +  - osg +  - path +  - htcondor + +type: news + +canonical_url: https://osg-htc.org/spotlights/2023-04-10-ospool-computation.html +image: + path: https://raw.githubusercontent.com/CHTC/Articles/main/images/ospool-comp.jpg + alt: Microscope beside computer by Tima Miroshnichenko from Pexels. + +description: Assistant Professor Eric Jonas uses OSG resources to understand the structure of molecules based on their measurements and derived properties. +excerpt: Assistant Professor Eric Jonas uses OSG resources to understand the structure of molecules based on their measurements and derived properties. + +card_src: https://raw.githubusercontent.com/CHTC/Articles/main/images/ospool-comp.jpg +card_alt: Microscope beside computer by Tima Miroshnichenko from Pexels. 
+ +banner_src: https://raw.githubusercontent.com/CHTC/Articles/main/images/ospool-comp.jpg +banner_alt: Microscope beside computer by Tima Miroshnichenko from Pexels. +--- + +**Assistant Professor Eric Jonas uses OSG resources to understand the structure of molecules based on their measurements and derived properties.** + + + + +Picture this: You have just developed a model that predicts the properties of some molecules and plan to include this model in a section of a research paper. However, just a few days before the paper is to be published on your professional website, you discover an error in the data generation process, which requires you to redo your computations, and quickly! +This scenario was the case with Assistant Professor [Eric Jonas](https://jonaslab.uchicago.edu), who works in the Department of Computer Science at the University of Chicago (UChicago). +While this process is normally tedious, he noted how the OSPool helped streamline the steps needed to regenerate results: “The OSPool made it easy to go back and regenerate the data set with about 70 million new molecules in just a matter of days.” + +Although this was a fairly recent incident for Jonas, he is not new to high throughput computing or the OSPool. With usage reaching as far back as his graduate school days, Jonas has utilized resources ranging from cloud computing infrastructures like Amazon Web Services to the National Supercomputing Center for his work with biological signal acquisition, molecular inverse problems, machine learning, and other ways of exploiting scalable computation. + +He soon realized, though, that although these other resources could process large amounts of data in a relatively short time, they required a long, drawn-out sequence of actions to provide results – creating an application, waiting for it to be accepted, and then waiting in line for long periods for a job to run. 
Faced with this problem in 2021, Jonas found a solution with the [OSG Consortium](https://osg-htc.org) and its OSPool, OSG’s distributed pool of computing resources for running high-throughput jobs. + +In April of 2021, he enlisted the help of [HTCondor](https://htcondor.com) and the OSPool to run pre-existing computations that allow for the generation of training data and the development of new machine learning techniques to determine molecular structures in mixtures, chemical structures in new plant species, and other related queries. + +Jonas’ decision to transition to the OSPool boiled down to three simple reasons: +- Less red tape involved in getting started. +- Better communication and assistance from staff. +- Greater flexibility in running other people’s software to generate data for his specific research, which would otherwise have been too computationally bulky to handle alone. + +In terms of challenges with OSPool utilization, Jonas’ only point of concern is the amount of time it takes for uploaded code to reach the OSPool. “It takes between 8 and 12 hours for that code to get to OSG. The time-consuming containerization process means that any bug in code that prevents it from running isn't discovered and resolved as quickly, and takes quite a while, sometimes overnight.” + +He and his research team have since continued to utilize the OSPool to generate output and share data with other users. They have even become advocates for the resource: “After we build our models, as a next step, we’re like, let’s run our model on the OSPool to allow the community (which constitutes the entirety of OSPool users) also to generate their datasets. 
I guess my goal, in a way, is to help OSG grow any way I can, whether that involves sharing my output with others or encouraging people to look into it more.” + +Jonas spoke about how he hopes more people will take advantage of the OSPool: +“We’re already working on expanding our use of it at UChicago, but I want even more people to know that OSPool is out there and to know what kind of jobs it's a good fit for because if it fits the kind of work you’re doing, it’s like having a superpower!” + diff --git a/2023-04-18-ASP.md b/2023-04-18-ASP.md new file mode 100644 index 00000000..99d8b89b --- /dev/null +++ b/2023-04-18-ASP.md @@ -0,0 +1,66 @@ +--- +title: Distributed Computing at the African School of Physics 2022 Workshop + +author: Hannah Cheren + +publish_on: +- chtc +- path +- osg +- htcondor + +type: user + +canonical_url: "https://osg-htc.org/spotlights/asp.html" + +image: + path: "https://raw.githubusercontent.com/CHTC/Articles/main/images/asp-banner.jpeg" + alt: Image obtained from the official ASP2022 page on the African School of Physics website. + +description: | + Over 50 students chose to participate in a distributed computing workshop at the 7th biennial African School of Physics (ASP) 2022 at Nelson Mandela University in Gqeberha, South Africa. +excerpt: | + Over 50 students chose to participate in a distributed computing workshop at the 7th biennial African School of Physics (ASP) 2022 at Nelson Mandela University in Gqeberha, South Africa. + +card_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/asp-banner.jpeg" +card_alt: Image obtained from the official ASP2022 page on the African School of Physics website. + +banner_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/asp-banner.jpeg" +banner_alt: Image obtained from the official ASP2022 page on the African School of Physics website. 
+--- + ***Over 50 students chose to participate in a distributed computing workshop at the 7th biennial African School of Physics (ASP) 2022 at Nelson Mandela University in Gqeberha, South Africa.*** + + + + + +Almost 200 students from 41 countries were selected to participate in the [7th ASP 2022](https://www.africanschoolofphysics.org/asp2022/) at [Nelson Mandela University in Gqeberha, South Africa](https://science.mandela.ac.za/ASP-2022). With the school being shortened to two weeks, a parallel learning system was implemented, where participants could choose which lectures to attend based on their educational goals. [Dr. Horst Severini](https://www.nhn.ou.edu/~hs/) is a Research Scientist and Adjunct Professor in [High Energy Physics](http://www-hep.nhn.ou.edu/) and [Information Technology](http://it.ou.edu/) from the University of Oklahoma (OU) and a co-leader of the high-performance computing workshop. He anticipated maybe 25 students attending his track, “...we had about that many laptops,” he remarked, “and then we ended up with over 50 students!” + +Severini was first introduced to distributed computing during his postdoc at OU. Then in the spring of 2012, Severini was introduced to [Kétévi Assamagan](https://www.aasciences.africa/fellow/ketevi-assamagan), one of the founders of the ASP. Assamagan met with Severini and invited him and his colleagues to participate, leading to a scramble to create a curriculum for this new lecture series. They were eager to show students how distributed computing could help with their work. + +After a few years of fine-tuning the high throughput classes, Severini has the workshop ironed out. After receiving an introduction to basic commands in Linux, the students started with a basic overview of high-energy physics, why computing is important to high-energy physics, and then some [HTCondor basics](https://htcondor.org/). 
“The goal, really, is to teach students the basics of HTCondor, and then let them go off and see what they can do with it,” Severini explained. The workshop was so successful that students worked through coffee breaks and even stuck around at the end to obtain [OSG accounts](https://portal.osg-htc.org/application) to continue their work. + +A significant improvement for the 2022 high-performance computing workshop was the move from using [OSG Connect](https://connect.osg-htc.org/) for training sessions to Jupyter Notebooks, a switch developed in mid-2022. “Jupyter allows people to ‘test drive’ submitting jobs on an HTCondor system without needing to create a full [OSPool account](https://portal.osg-htc.org/application),” [OSG](https://osg-htc.org/) [Research Computing Facilitator](https://chtc.cs.wisc.edu/CHTC-Facilitation.html) [Christina Koch](https://wid.wisc.edu/people/christina-koch/) clarified. “Moving forward, we hope people can keep using the Jupyter Notebook interface once they get a full OSPool account so that they can move seamlessly from the training experience to all of the OSPool.” + + + +“[Jupyter Notebooks] worked quite well,” Severini said, noting that the only issue was that a few people lost their home directories overnight. However, these “beginning glitches” didn’t slow participants down whatsoever. “People enjoyed [the workshop] and showed it by not wanting to leave during breaks; they just wanted to keep working!” + +Severini’s main goal for the high-performance computing workshop is to migrate the material into Jupyter Notebooks. “I’ve always been most familiar with shell scripts, so I always do anything I can in there because I know it's repeatable…but I’ll adapt, so we'll work on that for the next one,” he explains. + +Overall, “everything’s been working well, and the students enjoy it; we’ll keep adjusting and going with the times!” + +... 
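To give a flavor of the HTCondor basics the students practiced, a minimal submit file might look like the following sketch (illustrative only, not taken from the ASP course materials; the executable name and resource requests are hypothetical):

```
# Hypothetical example: queue 50 independent runs of the same script
executable     = analyze.sh
arguments      = $(Process)

log            = job.log
output         = out.$(Process).txt
error          = err.$(Process).txt

request_cpus   = 1
request_memory = 1GB
request_disk   = 1GB

queue 50
```

Submitting this description with `condor_submit` queues 50 independent jobs that HTCondor matches to available machines — the "let them go off and see what they can do with it" workflow Severini describes.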
+ +*More information about [scheduling](https://osg-htc.org/dosar/ASP2022/ASP2022_Schedule/) and [materials](https://osg-htc.org/dosar/ASP2022/ASP2022_Materials/) from the 7th ASP 2022. The 8th ASP 2024 will take place in Morocco. Check [this site](https://www.africanschoolofphysics.org/) for more information as it comes out.* + +*For more information or questions about the switch to Jupyter Notebooks, please email [chtc@cs.wisc.edu](mailto:chtc@cs.wisc.edu).* diff --git a/2023-04-18-CHTC-Philosophy.md b/2023-04-18-CHTC-Philosophy.md new file mode 100644 index 00000000..6ec25c35 --- /dev/null +++ b/2023-04-18-CHTC-Philosophy.md @@ -0,0 +1,74 @@ +--- +title: The CHTC Philosophy of High Throughput Computing – A Talk by Greg Thain + +author: Hannah Cheren + +publish_on: +- chtc +- path +- htcondor +- osg + +type: news + +canonical_url: "https://chtc.cs.wisc.edu/chtc-philosophy.html" + +image: +  path: "https://raw.githubusercontent.com/CHTC/Articles/main/images/chtc-philosophy-banner.jpg" +  alt: Image from Greg Thain’s CHTC Philosophy of High Throughput Computing slideshow. + +description: | + HTCondor Core Developer Greg Thain spoke to UW faculty and researchers about research computing and the missions and goals of the Center for High Throughput Computing (CHTC). +excerpt: | + HTCondor Core Developer Greg Thain spoke to UW faculty and researchers about research computing and the missions and goals of the Center for High Throughput Computing (CHTC). + +card_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/chtc-philosophy-banner.png" +card_alt: Image from Greg Thain’s CHTC Philosophy of High Throughput Computing slideshow. + +banner_src: "https://raw.githubusercontent.com/CHTC/Articles/main/images/chtc-philosophy-banner.png" +banner_alt: Image from Greg Thain’s CHTC Philosophy of High Throughput Computing slideshow. 
+--- + ***HTCondor Core Developer Greg Thain spoke to UW faculty and researchers about research computing and the missions and goals of the Center for High Throughput Computing (CHTC).*** + + + +[The Center for High Throughput Computing](https://chtc.cs.wisc.edu/) (CHTC) is proud to be home to a breadth of research on campus, with over 300 projects and 20 million core hours used by departments on the University of Wisconsin-Madison campus, ranging from the College of Agriculture and Life Sciences (CALS) to the School of Education, School of Pharmacy, and many more. “The CHTC is known best for being a place to run lots of fast jobs for free, with which we hope to continue democratizing computing across the campus,” Greg Thain began in his talks to UW-Madison researchers and staff on March 9 and 17, organized by UW-Madison Chief Technology Officer Todd Shechter. + +“We like to think of the CHTC like the UW Hospital,” Thain explained, “like the hospital’s main purpose is to train the next generation of health professionals and conduct medical research. In the same way, the CHTC is our research laboratory and is where others can come and conduct their research; we do both research and provide a service.” + +The main asset leveraged by the CHTC is research computing. “Research computing consists of research that happens to use computing and research about computing,” Thain explained, “both of which start and end with people.” Thain then described the two phases researchers go through when they approach the CHTC for help: “first, they seek assistance and guidance on a problem they’re currently facing. Second, they realize they can do something revolutionary with high throughput computing (HTC).” + +One advantage of research computing with the CHTC, tailored to scientists and researchers, is that they don’t have to spend time supervising their running programs. 
Users can configure an [HTCondor Access Point](https://osg-htc.org/docs/submit/osg-flock/) to manage all their work, allowing them to essentially “submit it and forget it.” This compute system is similar to others in that any user can understand it and rely on it, “except ours has the extra touch of being a ‘submit it and forget it’ system,” Thain clarified. + +Similarly, the CHTC also created software for where the work runs, called an HTCondor Execution Point (EP). These Execution Points may be machines owned by other resource providers and have different policies. + +Both researchers and resource providers may have constraints; the goal of HTCondor, then, is to “manage and maintain these constraints; there are many users and resource providers in the real world, and the CHTC is currently working on optimizing these individuals' wants and needs.” + +“This is a distributed problem,” Thain continued, “not because of the machines; it’s distributed because of the people.” Having distributed authority as opposed to distributed machines means that tools and policies are distributed. + +The implicit assumption is that all work can be divided into smaller, mostly independent jobs. In this way, “the goal is to optimize the time to finish running these jobs instead of the time to run a single one; to do this, we want to break up the jobs as much as possible so they can run in parallel,” Thain explained. The implication is that there are many different jobs, and how difficult they are to break up varies. + + + +To mitigate this, [research computing facilitators](https://chtc.cs.wisc.edu/CHTC-Facilitation.html) (RCFs) work with users and researchers to overcome their specific problems. RCFs are different from a traditional “help desk;” their role is to interface with graduate students, PIs, and other researchers and guide them to find the best-fit solution for their projects. 
RCFs must have a broad understanding of the basic sciences to communicate with the researchers, understand their work, and give them useful and reasonable recommendations and other technological approaches. + +“The CHTC’s top priority is always reliability, but with all this work going on, the dream for us is scalability,” Thain described. Ideally, more load would increase performance; in reality, it boosts performance a little, and then it plateaus. To compensate for this, the CHTC goes out of its way to make access points more reliable. “Adding access points helps to scale and allows submission near the user.” Thain notes the mantra: “submit locally, run globally.” + +As the CHTC is our on-campus laboratory for experimenting with distributed computing, the [Open Science Pool](https://osg-htc.org/services/open_science_pool.html) (OSPool) is a bolder experiment expanding these ideas to a national scale of interconnected campuses. + + + +The OSG and its OSPool provide computing access on a national level in the same way that someone can access an available machine locally. For example, if the machines on campus are unavailable or all being used, users can access machines in the greater OSG Consortium. “But at the end of the day, all this computing, storage and networking research is in service to the needs of people who rely on high throughput computing to accomplish their research,” Thain maintains. “We hope the OSPool will be an accelerator for a broad swath of researchers in all kinds of disciplines, from all over the United States.” + +... + +*The full slideshow can be found [here](https://github.com/GregThain/talks/blob/master/2023misc/CHTC%20for%20Research%20Computing.pptx). 
Please click [here](https://chtc.cs.wisc.edu/uw-research-computing/index.html) for more information about research computing within the CHTC, or visit [this page](https://chtc.cs.wisc.edu/uw-research-computing/get-help.html) to contact our RCFs with any questions.* diff --git a/2023-04-24-hannah.md b/2023-04-24-hannah.md new file mode 100644 index 00000000..9a840074 --- /dev/null +++ b/2023-04-24-hannah.md @@ -0,0 +1,130 @@ +--- +title: Get To Know Student Communications Specialist Hannah Cheren + +author: Shirley Obih + +publish_on: +- chtc +- path +- htcondor +- osg + +type: spotlight + +canonical_url: https://path-cc.io/news/2023-04-24-hannah/ +image: + path: https://raw.githubusercontent.com/CHTC/Articles/main/images/hannaheadshot.jpg + alt: Headshot of Hannah Cheren + +description: During her two-year tenure with the Morgridge Institute for Research - Research Computing lab, Hannah Cheren made significant science writing contributions and along the way changed the direction of her life. + +card_src: https://raw.githubusercontent.com/CHTC/Articles/main/images/hannaheadshot.jpg +card_alt: Headshot of Hannah Cheren + +banner_src: https://raw.githubusercontent.com/CHTC/Articles/main/images/hannaheadshot.jpg +banner_alt: Headshot of Hannah Cheren +--- + +**During her two-year tenure with the Morgridge Institute for Research - Research Computing lab, Hannah Cheren made significant science writing contributions and along the way changed the direction of her life.** + + +Hannah is a senior undergraduate student in Life Sciences Communications and Statistics, simultaneously working towards a certificate in Data Science. 
She is a contributing writer for the Center for High Throughput Computing (CHTC) and the National Science Foundation-funded PATh project, publishing 19 science and research computing articles describing high-throughput research computing and highlighting the researchers who utilize these organizations’ services. After her graduation this May, Hannah will be joining a public relations and communications consulting group for the life sciences as an Account Coordinator. + +Hannah takes her well-earned center stage to share a bit about herself, her experiences and professional trajectory so far, as well as her plans after graduation. + +**What piqued your interest in life sciences communication?** +I came to college intending to be a computer science major, but I immediately realized it wasn’t for me. I had a bit of a freak-out moment, but eventually made my way to the career advising office, where I was given a list of all the majors offered by the university so I could see all my options at a glance. + +Life Sciences Communication (LSC) stood out to me as an interesting route because I have always had an interest in writing and communications. I still felt like I didn't know much about LSC, so I reached out to Tera Wagner, the former Life Sciences Communication advisor, who really sold it to me. + + + +What drew me in was how different it is from journalism and other communications-based majors in the sense that you’re taught to take complex scientific information and translate it to a more easily digestible version that just about anybody can understand! + +**How did you hear about / get started as a writer with the OSG/PATh communications team at Morgridge?** +I learned about the job position from the advisor I just spoke about, Tera Wagner. She thought it +might be a good fit for me, and it turns out it was! + +**Why this position in particular?** + The job description captured my attention, and the interview process reinforced my interest, for sure. 
I remember being asked how well I could handle criticism, and while I was a bit stunned by the question, I knew I would be challenged and learn a lot in this role. As a writer, half the job is having people critique and edit your work. I knew this was the field I’d eventually like to go into, so learning to handle criticism this early in my career was a skill that I wanted to learn sooner rather than later. + +**How would you describe your experience so far working with the rest of the team?** +This job, in general, has been life-changing; it’s set me up for success in more ways than I expected. I remember the first couple of months were really challenging for me - this was my first “real” job, and even starting out, I felt like I had been thrown to the wolves. The summer of 2022 was a big turning point; I had more time to fully immerse myself and learn all I could, and started feeling a lot more confident. We had recently wrapped up HTCondor Week 2022, and within a couple of months, I had written and published seven articles about researchers from the event. It was a lot, but I became accustomed to how fast-paced this job could get, and it helped improve my efficiency, which I would say has really helped set me up for the real world. +In terms of ‘lows,’ I’m not sure what I would classify as a low. Honestly, it has all been a great learning experience. Even when things go wrong, I take it all in good stride. + +**Favorite story you’ve written to date and why?** +The Lightning Talks article was the one that I (not to be dramatic) felt I put my blood, sweat, and tears into. It was pretty intense because it involved interviewing and writing about work from 11 different researchers. The article ended up being really cool, and I'm very proud of it! +**What kind of writer did you hope you’d become prior to starting, and how has that changed in the time you’ve been here?** +When I was younger, I was really into writing and reading. 
My dream job at the time was to be a novelist. I used to write all the time, from elementary school all the way to high school, so it has always been in the picture. +As I got older, I began to skew away from writing because I wasn’t sure how I could make a career out of it and it didn't seem to be a highly sought-after professional path, or so I thought. +But this experience has felt really full circle. I feel like this job has allowed me to find my “writing voice” again - while still maintaining the scientific theme - which has been exhilarating and inspiring for me. + I feel I have been able to come into my own as a science writer for PATh and have learned what is expected of me in this position. Writing, coupled with video editing and scheduling Tweets, helped me feel more comfortable with the organization and further hone my technical and soft skills. + +**How would you say this position has helped you learn about High Throughput Computing (HTC)?** +It has helped a ton! I went from having no knowledge about HTC to enrolling in a class that teaches HTC because I have grown so much in my confidence. + +**Why do you think communication is important for the PATh project?** +The research that occurs within the PATh project is not only interesting, but so incredibly important within each field. Not only that, I think it’s important to communicate about this work in a way that people who aren’t in the field can understand it. By doing this, I hope to show researchers in all stages of their career or students who are interested in this type of work that it’s not all scary and complicated. Communicating about the PATh project, hopefully, motivates people who are already using HTC to stick with it and can encourage those who think it might be a good fit for their research to try it out. + +**What would you miss about your job when you leave?** +Oh my gosh, everything! 
I’ll, of course, miss the people I work with; I will miss my little cubicle where I can see everyone passing by and be near the people I work closest with. I will also miss the work - it’s true what they say; if you do what you love, you’ll never work a day in your life. I honestly get so excited to go to work because I just think what we do is so incredible. I’ll also miss the researchers - it’s been so great to be able to interview and interact with so many different kinds of people and learn about topics and research they’re passionate about. I’m so grateful for my time here and I’m excited about what else I get to do between now and when I graduate! + +**What would be your advice to upcoming writers who also aspire to work in life science communications?** +This field is often fast-paced and can sometimes feel overwhelming. My advice is not to get discouraged by it; eventually, you’ll get used to it, and it’ll be part of your routine. Also, I think something that a lot of science writers experience in the beginning of their careers is “losing their voice.” Science writing can be very technical, and as a writer, it can sometimes be disheartening to sacrifice your own style for more jargon aimed at a specific audience. After a while, you’ll find your “science writing voice;” practice truly does make perfect, and with a little time (and lots of editing), you’ll begin to produce writing that sounds like you but still delivers on that science aspect. Speaking of editing, your writing may go through many rounds of fine-tuning before publication. Try not to take it personally, and be confident in your writing! Take every piece of criticism as a learning opportunity and make the best out of it. + +**What is your hope for our industry?** +I hope to keep seeing a wide variety of people with different backgrounds and interests find LSC. 
I think many people see science communication and think they need a background in science and have to write all day, which couldn’t be farther from the truth. While I write a lot, I do it because I love it! However, people can take so many other avenues; from social media consulting to marketing, videography, lab work, genetics, social science research, and so many more; I can’t even name them all! For example, I’m currently conducting research using TikTok as my data source, which I didn’t even know would be a thing. I hope to see this field continue to branch out and break down boundaries on what can be studied. + +**I’m curious about your research on TikTok. Can you talk more about that?** +Yes! I’m currently writing a thesis on how TikTok has become a platform for psychological polarization - political polarization, in particular. We’re seeing an app that was originally intended to be an entertainment platform become a hub for information, including science communication. This new type of content “blew up” during the height of the pandemic in 2020, when scientists and doctors discovered that creating short videos on TikTok was a great way to reach a wide variety of audiences. However, as COVID-19 became politicized in the media, it did the same on TikTok. What’s even crazier is that these videos about COVID-19 and the vaccine seem to have polarized users to an extent unlike anything we’ve seen before. I think that’s super interesting and extremely important to study. +This thesis was inspired by a book I read called Frenemies by Jaime E. Settle. She essentially studied the same thing I described but on Facebook. I thought Settle’s train of thought and reasoning were so interesting, but I remember finishing it and thinking, “too bad this isn’t going to matter in a couple of decades.” While this book really opened the door to this bigger conversation, Facebook is not a platform younger generations use. 
So, using her line of thinking, I wanted to conduct similar research using TikTok, an app that’s wildly more popular among my generation and younger and has users that regularly communicate about scientific issues. Saying that I do research on TikTok sounds a little silly, but I really do think that my work will be important for studying political polarization in the future! + + +**What do you think you have accomplished for PATh?** +I would like to think my work has given researchers something tangible to share with their families, friends, and peers about the details of their research. Everyone I’ve interviewed so far is doing such fascinating work, and my goal when I’m writing about it is to shine as big a light as possible on them and all their hard work. With each article, I hope these researchers can read through my eyes how amazing all their accomplishments are and have a space where they can brag about it because they deserve to! +On the flip side, I hope that I show researchers who may think that HTC can advance their work that it’s possible to get started. You don’t need to be a rocket scientist or even a computer scientist to use these resources; anyone who can benefit from using HTC to make their lives just a little easier should absolutely try it. + +**How has your work here impacted how you think about your future and your goals?** +First and foremost, it has impacted how I think about science writing as not only an interest, but a possible career. I have learned so much and gained so much valuable experience, and people seem genuinely curious about what it is I do. +The jobs I have applied to post-graduation are more science writing and market research-type jobs at life sciences companies – which isn’t the trajectory I thought I would follow even a couple of years ago. 
That being said, I couldn’t be happier to have discovered my passion for this type of work - I love my job so much, and I definitely see myself doing something like this for a very long time! + +**Hannah outside of work:** + + +**When do you feel most like yourself?** +I love Madison, but I’m an east coast girl at heart; I’m from New Jersey, and spending time with my family there is so important to me. We have a very active seven-year-old dog and I love taking her on walks with my two younger sisters, who have always been my best friends! They’re both at school as well, and I love spending as much time as I can with them and my parents! + +**If you could have dinner with, interview, and write about one person, alive or dead, who would it be and why?** +Katherine Johnson. She was a mathematician at NASA and calculated trajectories that led Apollo 11 to the moon. She was also one of the first African American women to work at NASA. + +I was in high school when the movie Hidden Figures came out. This movie tells the story of three young African American women working at NASA, including Katherine Johnson. I was in complete awe of Taraji P. Henson’s portrayal of Johnson, and I instantly became fascinated by her and her story. This movie was so inspiring to me as a young girl interested in pursuing a STEM-related field, and Katherine Johnson, in particular, was a character who really stuck out to me. She passed away a couple of years ago, but I would’ve loved nothing more than to speak with her and express to her how much of an impact she had on me as a girl in STEM! + +**If you had to describe your personality in a song, what would be the title?** +Bubbly! I’m a big optimist. + + + +**What animal intrigues you the most and why?** +Cows. We don’t see a lot of cows in New Jersey…so coming to Wisconsin and seeing them in fields every five minutes was so funny to me. 
I’ve had a running joke ever since that they’re my favorite animal, but now I think I tricked myself into actually believing it, so they intrigue me the most for sure! + +**Quick-fire questions**: +- Vacation or staycation? Vacation. I love to travel! I’m going to Italy to visit my sister abroad and Israel during the summer with my sisters and cousin for birthright, and I couldn’t be more excited. +- TikTok or Instagram? TikTok. +- Rom-com, action, supernatural or horror movies? Action; my friends from home got me on a Marvel binge recently! +- Fine dining or casual? Casual. +- Favorite decade for music? This is going to be so boring, but I don’t think I have a favorite decade of music. Most of what I listen to is from this decade, though. My favorite artist currently is Quinn XCII. +- Thrifting or high street? Thrifting, for sure! diff --git a/2023-04-27-CDIS-bldg.md b/2023-04-27-CDIS-bldg.md new file mode 100644 index 00000000..571cd310 --- /dev/null +++ b/2023-04-27-CDIS-bldg.md @@ -0,0 +1,33 @@ +--- +title: Construction Commences on CHTC's Future Home in New CDIS Building + +author: Shirley Obih + +publish_on: + - chtc + - htcondor + - path + +type: news + +canonical_url: https://chtc.cs.wisc.edu/CDIS-bldg.html + +image: + +description: Breaking ground on new CDIS building +excerpt: + +card_src: +card_alt: + +banner_src: +banner_alt: +--- + +Breaking ground is as symbolic as it is exciting – a metaphorical act of consecrating a new location and the start of something new. On April 25th, UW-Madison broke ground on 1240 W. Johnson St., Madison, WI; a location that will become the new building for the School of Computer, Data & Information Sciences and the new home for the Center for High Throughput Computing (CHTC) in 2025. 
+ +“The new CDIS building is the latest crest in a wave of expansion and renewal enhancing the campus landscape to meet the needs of current and future Badgers,” [the university reports](https://news.wisc.edu/governor-chancellor-to-break-ground-on-new-home-for-uws-newest-school/). This building, expected to be nearly 350,000 square feet, will be the most sustainable facility on campus and will create a new center of activity for UW, enabling important connections and establishing a tech corridor from Physics and Chemistry to the Discovery Building to the College of Engineering. + +CHTC Technical Lead Todd Tannenbaum wryly remarks that "while the 1960s charm of our current old building is endearing at times (isn't cinder block making a comeback?), I am inspired by the opportunity to work every day in a new and modern building. I am also especially excited by how this will open up new possibilities for collaboration across not only Comp Sci, but also the community of faculty and researchers in the Information School, Statistics, and Data Sciences." + +Read more about the extensive construction plans ahead, the budget, and how the project is being funded [here](https://news.wisc.edu/governor-chancellor-to-break-ground-on-new-home-for-uws-newest-school/). Launch a virtual tour of the building [here](https://cdis.wisc.edu/building/tour/). diff --git a/2023-10-24-GPARGO-CC*.md b/2023-10-24-GPARGO-CC*.md new file mode 100644 index 00000000..02e99dda --- /dev/null +++ b/2023-10-24-GPARGO-CC*.md @@ -0,0 +1,104 @@ +--- +title: Great Plains Regional CyberTeam Granted CC* Award + +author: Hannah Cheren + +publish_on: + - osg + - path + - chtc + +type: user + +canonical_url: https://path-cc.io/news/2023-10-24-great-plains-regional-cyber-team-granted-cc-award/ https://osg-htc.org/spotlights/great-plains-regional-cyber-team-granted-cc-award.html + + +image: + path: