
Behind the Science: A Conversation with Joe Mambretti, Director, International Center for Advanced Internet Research

The International Center for Advanced Internet Research (iCAIR), located on the Chicago campus, quietly supports some of the most ambitious scientific projects on the planet. From the Large Hadron Collider to global radio astronomy and bioinformatics, iCAIR’s facilities at Northwestern transport, quite literally, the world’s largest and most complex sets of scientific data nationally and internationally.

We sat down with Joe Mambretti, director of iCAIR, to talk about how it all began, how it works, and why the world’s scientists rely on specialized networks designed by an international consortium, including Northwestern.

Tell us about iCAIR and introduce your team members.

The International Center for Advanced Internet Research (iCAIR) was created as a worldwide collaboration to develop innovative networking services and technologies to support large-scale, data-intensive global science. The Center was established with the understanding that the challenges of creating new communication services and infrastructure will be overcome only through cooperative efforts by experts from multiple technology and science research communities.

With its research partners, iCAIR develops and implements a range of network innovations, specialized for science research, including software-defined networking techniques, high-performance protocols, Data Transfer Nodes (DTNs), wavelength switching, photonic and quantum communications, control systems, in-band computing, monitoring and measurement analytics, and AI automation and optimization.

iCAIR staff includes Jim Chen, associate director (research and operations); Fei Yeh, research associate (research and operations); and David Martin, research associate (research).

Within IT, we work closely with Ruth Ann Ostrowski, director, Cyberinfrastructure Service Operation, and her team: Mike Korby, manager; James Panegasser, senior data center technician; Dan Daley, senior data center technician; and Veronica Durdov, vendor project specialist. We also work with Chris Fabri, associate director of network engineering, security, and development, in Telecommunications and Network Services.

In addition, the multi-organizational consortium that manages the StarLight International/National Communications Exchange Facility includes the Network Operations Center at Argonne National Laboratory, comprising Linda Winkler, technical director; Bradon Siegal, network engineer; and Tom Costello, network engineer. Another major contributor is Phil DeMar, a consultant at Fermi National Accelerator Laboratory.

iCAIR works closely with international partners. How did these global collaborations begin?

Everything we do is collaborative, and science is global. iCAIR’s activities were motivated by an earlier need to connect supercomputing centers in the 1980s and 1990s, including Argonne National Laboratory, Fermilab, Northwestern, the University of Chicago, the University of Illinois Chicago, and the National Center for Supercomputing Applications. These organizations helped create the National Science Foundation’s NSFNET, a national backbone that interconnected regional research and education networks, providing access to supercomputers for universities and national laboratories.

Because the common internet cannot support the massive datasets that these centers, along with specialized instruments such as telescopes and synchrotrons, need to exchange, a consortium of scientists and network engineers began building specialized architectures, services, technologies, and networks for global data-intensive science.

In 1994, our consortium received NSF funding (with Ameritech) to establish the world’s first international research and education exchange point in Chicago. We used that exchange to create a seven-state regional research and education network, the Metropolitan Research and Education Network (MREN). In 1997, with NSF funding, we created an open international exchange that later became the StarLight International/National Communications Exchange Facility, a multi-organizational collaboration that includes research universities, national laboratories, and federal agency networks.

For those of us outside this world, what exactly is an open exchange point?

Open exchange points are facilities specifically designed to support specialized services for data-intensive science. Twenty-first-century science is based on extremely large collections of data generated at instrumentation sites worldwide and distributed globally. These sites are interconnected to compute centers, storage centers, and analytic facilities via high-capacity fiber-optic networks that form the core infrastructure enabling global scientific collaboration. Open exchange points are designed to interconnect these fiber-based science superhighways.

The StarLight Facility is one of the largest and most advanced exchanges, connecting to similar exchanges in Switzerland, Japan, South Korea, and Singapore, and in major cities across the globe, including New York City, Washington, D.C., Seattle, Los Angeles, Amsterdam, and Montreal.

“A golden age of science, technology, and new knowledge discovery has just begun; it will result in innovations that will transform the world.”

— Joe Mambretti, Director of iCAIR

Who uses the data that flows through these networks? Can any scientist access it?

Access is determined by the policies of specific science communities. If research is considered meritorious and aligned with a designated project, access is granted. For example, the Large Hadron Collider (LHC) in Switzerland generates more than 200 petabytes of data per year, and the High-Luminosity LHC, soon to be implemented, will generate roughly ten times that amount.

Because no single data center could analyze that much data, it is distributed to processing centers around the world through two private networks StarLight supports: the LHC Optical Private Network (LHCOPN), which provides direct fiber connections to Tier 1 centers, and the LHC Open Network Environment (LHCONE), which primarily connects Tier 2 and Tier 3 centers.
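To give a sense of scale, here is a rough back-of-the-envelope sketch (illustrative arithmetic only, not iCAIR measurements) of the sustained bandwidth needed just to keep pace with the LHC's yearly output and with the ten-fold High-Luminosity projection:

```python
# Back-of-the-envelope: average bit rate required to stream a yearly
# data volume continuously. Illustrative figures only.

PETABYTE_BITS = 10**15 * 8          # bits in one (decimal) petabyte
SECONDS_PER_YEAR = 365 * 24 * 3600

def sustained_gbps(petabytes_per_year: float) -> float:
    """Average rate (Gbps) to move a yearly volume spread evenly over the year."""
    return petabytes_per_year * PETABYTE_BITS / SECONDS_PER_YEAR / 1e9

print(f"LHC (~200 PB/yr):     {sustained_gbps(200):.0f} Gbps sustained")
print(f"HL-LHC (~2,000 PB/yr): {sustained_gbps(2000):.0f} Gbps sustained")
```

Even averaged perfectly over a year, the current LHC output alone implies roughly 50 Gbps of continuous transfer, far beyond what general-purpose internet paths sustain, which is why dedicated networks like LHCOPN and LHCONE exist.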

Several non-LHC science communities have also been granted access to LHCONE; however, the network remains focused on particle physics. A new international private network, MultiONE, is being prototyped for other large-scale, data-intensive sciences.

With so much data moving around the world, how is storage handled?

Most storage services are provided by computation centers, using a blend of tape, spinning disks, and solid-state drives. With colleagues at Johns Hopkins University and the University of California, San Diego, we created the Open Storage Network, initially supported by the Gordon and Betty Moore Foundation.

What are some other areas of research that rely on iCAIR’s infrastructure?

We support almost all major large-scale global science projects today, including high-energy physics, radio astronomy, telescopes, bioinformatics, genomics, precision medicine, fusion energy, atmospheric sciences, high-luminosity synchrotrons, and, of course, AI for scientific discovery.

New instruments can generate staggering volumes of data, which general networks cannot transport. Specialized architecture, services, technologies, and global partnerships make large-scale global research possible.

Why is this work so important to Northwestern?

iCAIR's mission complements one of the University’s core missions: research for new knowledge discovery. The future of science research is analytics based on extremely large amounts of data, which are growing at an accelerating rate. New techniques, especially those based on AI, machine learning, and deep learning, enable investigations that integrate data across science communities, supporting large-scale interdisciplinary research.

Northwestern also has a strong orientation toward international research, which is central to iCAIR’s mission. In addition, these activities directly relate to major technology policy initiatives.

Recently, the White House Office of Science announced a Manhattan Project-style initiative, The Genesis Mission: Transforming Science and Energy with AI, which complements two related Department of Energy (DOE) projects: the Science Cloud and the Integrated Research Infrastructure project, which interconnects multiple DOE science facilities.

What other projects is the team involved in?

In partnership with an international consortium, iCAIR is designing and implementing a Global Research Platform (GRP), a distributed environment for next-generation, data-intensive science that will support more than 100 times the data capacity of today’s computational science ecosystems. The GRP partners with several other research platform projects, including the NSF’s National Research Platform, the Asia Pacific Research Platform, and counterparts in Europe and other parts of the globe.

Several of our research consortia are also currently prototyping international 1.6 Tbps WAN end-to-end global network services for high-energy physics and other data-intensive sciences.
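To illustrate what such line rates mean in practice, the following sketch (simple arithmetic that ignores protocol overhead, congestion, and parallel transfers) compares the time to move a one-petabyte dataset at a few representative rates; the rate labels are hypothetical examples, not iCAIR's actual links:

```python
# Illustrative only: wall-clock time to move one petabyte at various
# line rates, assuming a perfectly utilized link.

PETABYTE_BITS = 10**15 * 8  # bits in one (decimal) petabyte

def transfer_seconds(petabytes: float, gbps: float) -> float:
    """Seconds to move `petabytes` of data at `gbps` gigabits per second."""
    return petabytes * PETABYTE_BITS / (gbps * 1e9)

for label, rate_gbps in [("10 Gbps campus uplink", 10),
                         ("100 Gbps research link", 100),
                         ("1.6 Tbps prototype WAN", 1600)]:
    hours = transfer_seconds(1, rate_gbps) / 3600
    print(f"{label}: {hours:,.1f} hours per petabyte")
```

At 10 Gbps a petabyte takes over nine days; at 1.6 Tbps it drops to well under two hours, which is the difference between shipping disks and working with remote data interactively.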

We support multiple large-scale computational and network testbeds, including the NSF Chameleon Cloud testbed, which is being used by over 10,000 computer scientists, and the NSF’s national FABRIC networking research testbed.

Working with Northwestern’s Center for Photonic Communication and Computing, Argonne National Laboratory, and Fermi National Accelerator Laboratory, iCAIR developed and currently operates a metro-area quantum networking testbed on which multiple successful quantum teleportation experiments have been conducted. This quantum network is being designed to support multi-institutional quantum computers, which will be closely integrated with supercomputers.

What is your favorite place on campus?

Abbott Hall. It is a fine example of Art Deco architecture with an interesting history. Built as a dormitory in 1940, it housed naval officers during WWII who were training to fly planes from aircraft carriers.