Poster Presentations involve presenters showcasing key aspects of their research on a poster, giving attendees and judges an opportunity to interact with them and ask questions. Posters should be submitted to the CUPC 2024 organizing committee by 10:00 AM on Friday, 25 October 2024; students with posters will be assigned spots within the space shortly before presentation time. Presenters should be available for the entire duration of the poster session, but are also encouraged to take a look at their peers' posters!
Title | Presenter
The Effects of a Heavy Jupiter-like Planet on Earth’s Orbit | Aidan Mikael Mohammed
Abstract: | Various astronomical models describing the evolution of our solar system have determined that the orbit of Neptune was initially closer to the Sun than that of Uranus. Eventually, gravitational interactions with Jupiter and Saturn destabilized the orbits of Uranus and Neptune, causing the two planets to exchange locations and their orbital radii to increase drastically. This work examined and quantified the effects of Jupiter, or of a heavier Jupiter-like planet of mass m_{JL}, on Earth’s orbit in a plane, without considering Milankovitch effects resulting from Earth’s obliquity (tilt) and precession (wobble). The effect of a Jupiter-like planet on Earth’s orbit only becomes significant if m_{JL} is much greater than the actual Jupiter mass m_{J}. We also considered the impact of Jupiter on an Earth-like planet located beyond Jupiter. The orbit of the Earth-like planet becomes very unstable if its closest approach to Jupiter is within several hundred million kilometers. This demonstrates why small planets of Earth’s mass are not found between the heavy outer planets. The Newtonian differential equations governing the planetary orbits were solved using Runge-Kutta methods implemented in a MATLAB program.
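As a rough illustration of the Runge-Kutta integration the abstract describes (the study itself used MATLAB; this minimal Python sketch uses placeholder constants, a bare two-body force, and an arbitrary step size):

```python
import numpy as np

# Minimal sketch of an RK4 step for a planet orbiting the Sun in a plane.
# The actual study included Jupiter-like perturbers; this is two-body only.
GM_SUN = 1.32712440018e20  # standard gravitational parameter of the Sun [m^3/s^2]

def deriv(state):
    """Time derivative of state = [x, y, vx, vy] under solar gravity."""
    x, y, vx, vy = state
    r3 = (x**2 + y**2) ** 1.5
    return np.array([vx, vy, -GM_SUN * x / r3, -GM_SUN * y / r3])

def rk4_step(state, dt):
    """Advance the orbit by one fourth-order Runge-Kutta step of size dt [s]."""
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Earth-like initial conditions: ~1 au circular orbit, integrated for one year.
state = np.array([1.496e11, 0.0, 0.0, 29_780.0])
for _ in range(365 * 24):
    state = rk4_step(state, 3600.0)  # 1-hour steps
```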
Impure Relationalism in Cosmology | Siddhartha Bhattacharjee
Abstract: | A structural approach to theoretical physics reveals that genuine cosmological models must be relational in a manner roughly laid out in Leibniz’s framework of monadology. In an event ontology, this has the implication that there cannot be global symmetries in spacetime, which is verified for compact cosmological solutions in general relativity.
However, since spacetime and 4-momentum are dynamically paired in a Hamiltonian treatment, it is possible to interpret 4-momentum as an intrinsic quantity that nonetheless has causal efficacy, thus motivating the energetic causal sets program in quantum gravity. This form of impure relationalism further allows the possibility of a preferred, irreducible notion of universal time and reference frames measurable only at the cosmic scale, remaining compatible with the empirical findings of general relativity while opening the possibility of describing a fundamental theory from which general relativity and quantum mechanics may emerge for special subsystems and levels of description.
The AdS/CFT Correspondence: From Spacetime Geometry to Quantum Field Theories | Mark Choi
Abstract: | Juan Maldacena’s groundbreaking 1997 paper, “The Large N Limit of Superconformal Field Theories and Supergravity,” introduced the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence, a pivotal concept for understanding quantum gravity. This correspondence proposes a duality between gravitation in Anti-de Sitter (AdS) spacetime and conformal field theories (CFTs) on its lower-dimensional boundary. Within this framework, I present the foundational concepts of AdS spacetime geometry and the AdS/CFT correspondence. AdS spacetime, a solution of Einstein’s field equations, is characterized by hyperbolic geometry in its spatial dimensions together with an additional timelike dimension, while CFTs, well-known quantum field theories of elementary particles, reside on the boundary. The AdS/CFT correspondence suggests that the gravitational theory in the bulk (the interior of the spacetime) can be described equivalently by these quantum field theories. This duality resembles the holographic principle, which states that the theories inside a volume can be encoded on its boundary. Understanding the CFTs of particles can thus lead to a full description of spacetime, as information is preserved according to quantum principles. Finally, I discuss further implications for quantum gravity and applications of the correspondence.
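For reference, a standard textbook form of the AdS metric (Poincaré patch, curvature radius L; not taken from the abstract itself):

```latex
ds^{2} = \frac{L^{2}}{z^{2}}\left(-dt^{2} + d\vec{x}^{\,2} + dz^{2}\right)
```

The dual CFT lives on the conformal boundary at z → 0, which is the "lower-dimensional boundary" referred to above.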
Optimizing Crab Cavities with Higher Order Mode Dampers for the Electron-Ion Collider | Paul Graham
Abstract: | Crab cavities enhance particle collision rates in accelerators by rotating particle bunches. However, higher order modes (HOMs) in these cavities can cause heating, beam instability, and reduced performance. This work moves toward optimizing the placement of coaxial HOM dampers in the 394 MHz crab cavity for the Electron-Ion Collider (EIC). Controlling HOMs will improve cavity performance and beam quality, advancing the EIC’s ability to achieve precise, high-luminosity collisions.
The Importance of Physics Outreach in Attracting More Students to Physics | Michaela Hishon
Abstract: | “Physics isn’t for me, only geniuses can be physicists” is a phrase we unfortunately hear far too often from prospective physics students. At the University of Guelph, the physics department’s outreach team addresses this mindset and more through a variety of strategies, from outreach events to specialized video content. Since improving the public’s attitude towards physics benefits the physics community as a whole, it is in our best interest as physicists to approach this mindset shift through the lens of a physicist: by creating a plan and executing a possible solution.
Through community engagement initiatives, over 1100 students participated in outreach events with Guelph Physics’ 7 m wide inflatable planetarium in Summer 2024. In addition, customized video content series were created to introduce physics faculty to the student body and to share physics students’ journeys and insights. By relaying relevant information in an accessible manner, Guelph Physics works towards inspiring curiosity in its audiences and improving their attitudes towards physics as a whole.
Determination of the Superfluid Density in the Vortex State of Sr2RuO4 by μSR | Zoe Kartsonas
Abstract: | Despite 30 years of intense study, the pairing symmetry and related gap structure of the type-II superconductor strontium ruthenate (Sr2RuO4) are still not known. A recent paper by Khasanov et al. provides evidence for line nodes in the superconducting gap based on transverse-field muon spin rotation (TF-μSR) measurements of the temperature and magnetic field dependences of the in-plane magnetic penetration depth (λab) in the vortex state of Sr2RuO4 single crystals. In particular, λab is reported to exhibit a T-linear dependence in the low temperature limit, indicative of pronounced low-energy quasiparticle excitations associated with gap nodes. This is in contrast to an earlier TF-μSR study of Sr2RuO4, which found no significant temperature dependence for λab at low temperatures. We argue that these different findings are not due to improvements in sample quality, but rather are the result of different data analysis methods. By comparing four different analysis methods for determining λab(T, B) in the vortex state of Sr2RuO4 from TF-μSR measurements, we show that the spatial variation of the magnetic field in the vortex state of Sr2RuO4 is extremely unusual. This finding suggests that the superconducting state of Sr2RuO4 has a unique but elusive unconventional order parameter.
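For context, TF-μSR determinations of λab in the vortex state commonly relate the measured Gaussian relaxation rate to the penetration depth; one widely used approximation for an ideal triangular vortex lattice (Brandt 2003), with λ in nm and reduced field b = B/Bc2, is

```latex
\sigma_{sc}\,[\mu\mathrm{s}^{-1}] \simeq 4.83\times10^{4}\,(1-b)\,\bigl[1 + 1.21\,(1-\sqrt{b})^{3}\bigr]\,\lambda^{-2}
```

Whether such an idealized field distribution applies to Sr2RuO4 is precisely the kind of assumption the analysis-method comparison above interrogates.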
Ab initio Combined Neutrino Mass Limits from Neutrinoless Double Beta Decay | Taiki Shickele
Abstract: | Neutrinoless double-beta decay is a hypothetical second-order weak process in which a pair of neutrons decays into two protons and two electrons. Observation of this decay would point to a Majorana nature of the neutrino, lepton number violation, and the absolute mass scale of the neutrino, and possibly to further new physics. Crucially, constraining neutrino masses from current and next-generation experiments requires the use of nuclear matrix elements, which until now have only been obtainable through phenomenological methods. However, recent developments have made these matrix elements accessible through ab initio nuclear theory.
Using a Bayesian approach, we combine likelihoods from leading experiments to obtain a global neutrino mass constraint from ab initio nuclear matrix elements. Furthermore, utilizing a simple Poisson counting analysis, we construct the combined sensitivity reach of several next-generation experiments. Limits are also computed for a heavy sterile-neutrino exchange mechanism, which arises in many theories beyond the Standard Model, instead of the standard light-neutrino exchange. These constraints allow us to determine the total physics reach of all neutrinoless double-beta decay experiments combined, better informing our exclusion reach on the absolute mass scale of the neutrino.
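A minimal sketch of the kind of Poisson counting combination described above, assuming placeholder isotope parameters and counts (none of the numbers are from the actual experiments):

```python
import numpy as np

NA, LN2 = 6.022e23, np.log(2)

experiments = [
    # (molar mass [g/mol], exposure [kg yr], efficiency, expected bkg, observed counts)
    (136.0, 100.0, 0.8, 1.0, 1),
    (76.0, 50.0, 0.9, 0.5, 0),
]

def expected_signal(t_half, molar_mass, exposure, eff):
    """Expected 0vbb signal counts for half-life t_half [yr]."""
    atom_years = exposure * 1e3 / molar_mass * NA
    return eff * LN2 * atom_years / t_half

def combined_log_likelihood(t_half):
    """Sum of Poisson log-likelihoods over all experiments (constants dropped)."""
    total = 0.0
    for molar_mass, exposure, eff, bkg, n_obs in experiments:
        mu = expected_signal(t_half, molar_mass, exposure, eff) + bkg
        total += n_obs * np.log(mu) - mu
    return total

# Scan half-lives; a ~90% CL lower limit sits where the log-likelihood first
# comes within ~1.35 of its maximum (Wilks' approximation).
grid = np.logspace(24, 28, 400)
log_l = np.array([combined_log_likelihood(t) for t in grid])
t_half_limit = grid[np.argmax(log_l > log_l.max() - 1.35)]
```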
A Compton Suppression Spectrometer for Nuclear Safeguards and Security Applications | Maggie Berube
Abstract: | Proper identification of the radioisotopes present in a sample is crucial to nuclear safeguards and nuclear forensics, as well as many other fields, and gamma spectroscopy is an important technique for achieving that goal. One of the challenges of gamma-ray spectrometers is the Compton continuum, caused by Compton-scattered photons interacting with the detector. The Compton continuum is an unwanted detector response that conceals low-energy and low-activity photopeaks, preventing the proper identification of the radioactive isotopes in the investigated sample and making analysis difficult, if not impossible. One solution to this problem is the Compton Suppression Spectrometer (CSS) technique, in which a system of two detectors, one with high resolution and one with high efficiency, works in coincidence. This poster presents the current progress on developing a CSS at the Canadian Nuclear Laboratories (CNL) in Chalk River, including the detector assembly, the data acquisition system, and anti-coincidence analysis methods.
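A hedged sketch of the anti-coincidence idea behind Compton suppression, with synthetic hit times, a placeholder veto window, and an assumed "HPGe" label for the high-resolution detector (the CNL acquisition and analysis chain is certainly more involved):

```python
import numpy as np

def suppress(hpge_times, guard_times, window_ns=500.0):
    """Boolean mask keeping HPGe events with no guard-detector hit within the window."""
    guard_times = np.sort(guard_times)
    idx = np.searchsorted(guard_times, hpge_times)
    lo = np.clip(idx - 1, 0, len(guard_times) - 1)   # nearest guard hit before
    hi = np.clip(idx, 0, len(guard_times) - 1)       # nearest guard hit after
    nearest = np.minimum(np.abs(hpge_times - guard_times[lo]),
                         np.abs(guard_times[hi] - hpge_times))
    return nearest > window_ns

rng = np.random.default_rng(0)
hpge = np.sort(rng.uniform(0, 1e9, 10_000))    # stand-in HPGe hit times [ns]
guard = np.sort(rng.uniform(0, 1e9, 50_000))   # stand-in suppressor hit times [ns]
accepted = hpge[suppress(hpge, guard)]         # events kept in the suppressed spectrum
```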
High-Performance Seismometers Based on a Parallel Dipole Line Magnetic Trap | Savero Lukianto Chandra
Abstract: | High-performance seismic sensors are critically needed in many seismically active regions around the world to support seismic monitoring and mitigation activities and to provide a high volume of high-quality data for artificial intelligence and deep learning research in seismology. Unfortunately, high-performance broadband seismometers are currently very expensive. This study reports the development of a new generation of high-performance earthquake sensors based on parallel dipole line (PDL) magnetic traps. The sensor consists of a pair of dipole line magnets and a levitating graphite rod that sits in a unique camelback potential, serving as a very low frequency (~0.45 Hz) oscillator suitable for seismic sensing. This sensor demonstrates exceptional sensitivity, detecting low-frequency, low-amplitude signals from distant M6 earthquakes up to 17,000 km away, while offering a tenfold cost reduction compared to conventional broadband sensors.
We also investigated a unique self-calibration mechanism for PDL sensors, associated with their extremely high sensitivity, to detect Earth’s tides. We evaluated continuous waveforms, corresponding to the ground’s tilt, recorded by PDL sensors over 30 days. This unique feature, which is not available in conventional short-period seismometers, can be utilized for the sensors’ self-calibration and performance status validation. This is important because many of these sensors will be deployed in remote locations, including on the ocean bottom. Our findings establish PDL sensors as a new class of seismic sensors based on magnetic traps, with performance between short-period and broadband sensors.
Identity Crisis: Distinguishing Between Isomers & Ground States With TITAN’s Time-of-Flight Analyzer | Ellen Brisley
Abstract: | Mass is a unique signature that helps us understand the nucleus. TRIUMF’s Ion Trap for Atomic and Nuclear science (TITAN) conducts precise mass measurements of exotic nuclei in order to explore nuclear structure theories. TITAN’s Multiple-Reflection Time-of-Flight Mass Spectrometer (MR-ToF-MS) is capable of performing precise mass measurements on nuclides of almost identical mass (isobars).
Neutron-deficient isotopes of cesium and barium have been measured at TITAN. These masses present unique experimental challenges; as a result, previous measurements carry large uncertainties or exist only as theoretical estimates. Additionally, nuclei with identical numbers of protons and neutrons can exist in long-lived excited states, called isomers, that differ in energy and half-life. For the cesium isotopes studied, there are several unmeasured isomers that could shed light on nuclear structure. Using data from the MR-ToF-MS, we identify peaks, perform shape and mass calibrations, and fit exponentially modified Gaussians to the peak shapes to make precise mass measurements. Because of the MR-ToF-MS’s finite resolving power, we use a hypothesis-test technique to identify the presence of cesium isomers. Our methodology enables conclusions to be drawn on the presence, or absence, of isomers among ground-state isotopes. Moreover, we achieve first-time mass measurements of neutron-deficient cesium and barium, contributing to the mapping of the nuclides.
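A hedged sketch of the peak-fitting step described above: fitting an exponentially modified Gaussian (EMG) to a synthetic time-of-flight peak with scipy (placeholder data, not TITAN's analysis code):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(t, amp, mu, sigma, tau):
    """EMG: a Gaussian (mu, sigma) convolved with an exponential tail (tau)."""
    arg = (sigma**2 / tau - (t - mu)) / (np.sqrt(2) * sigma)
    return (amp / (2 * tau)
            * np.exp(sigma**2 / (2 * tau**2) - (t - mu) / tau)
            * erfc(arg))

# Synthetic noisy peak standing in for a measured ToF spectrum slice.
t = np.linspace(-5, 15, 400)
rng = np.random.default_rng(1)
data = emg(t, 100.0, 2.0, 0.8, 1.5) + rng.normal(0, 0.5, t.size)

popt, pcov = curve_fit(emg, t, data, p0=[80.0, 1.5, 1.0, 1.0])
mu_fit, mu_err = popt[1], np.sqrt(pcov[1, 1])  # centroid -> mass via calibration
```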
Modelling GaN and InGaN Nanowires with Non-Uniform Dopant Concentrations | Michael Baker
Abstract: | Simulation of GaN and InGaN nanowires (NWs) with one axial p-n junction is relatively straightforward, and many computational tools are well suited to handle such systems in 1D, 2D, or 3D. The introduction of non-uniform dopant concentrations, for example radial p-n junctions, can complicate matters significantly; notably, a 1D analysis can no longer be simply applied. The NWs in this work were grown with molecular beam epitaxy. A non-uniform concentration of dopant impurities was first suspected when electron holography phase data showed a smaller than expected axial built-in potential in the NW. These suspicions were confirmed when atom probe tomography revealed no detectable p-type Mg dopants in the bulk of the NW, where a concentration of 3 ppm was expected. This work addresses this so-called ‘core-shell’ NW dopant system and presents methods of modifying 1D and 2D simulations to fit this new problem.
Designing an Algorithm for the Control of an Inverted Pendulum | Mattia Altomare
Abstract: |
Position Calibration for the MATHUSLA Test Stand | Bennett Winnicky-Lewis
Abstract: | The Standard Model (SM) has had great success at describing the physics of our subatomic world. However, this model still has many shortcomings, including the absence of a dark matter particle. The proposed MATHUSLA detector at CERN aims to search for long-lived particles (LLPs) created by proton-proton collisions in the LHC during the accelerator’s high-luminosity run. This detector will search for neutral LLPs decaying into charged SM particles. Current detectors such as CMS and ATLAS would miss these particles because they sit too close to the beamline, so the proposed detector would be located on the surface, around 80 meters above the collision point. In order for MATHUSLA to be successful, the tracking of particle trajectories must be very accurate. The detector will consist of multiple layers of scintillators, with wavelength-shifting fibers (WLSFs) transporting the photons that are produced to silicon photomultipliers for readout. It was found that both the curvature and the quality of a WLSF affect its effective index of refraction. For this reason, each fiber in the detector will need to be calibrated for accurate particle reconstruction. This presentation looks at the process of calibrating the position reconstruction in the prototype test stand constructed at the University of Victoria. The effective index of refraction was found to be 1.127 +/- 0.013 times the nominal index of refraction of the WLSF used in the prototype test stand.
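To illustrate why the effective index matters, here is a generic two-ended timing reconstruction (not the MATHUSLA calibration code; the 1.127 scale factor is the abstract's result, while the nominal core index and the timings are assumptions):

```python
# A hit at position x along a fiber of length L reaches the two ends at
# t1 = x/v and t2 = (L - x)/v with v = c/n_eff, so x = (L - v*(t2 - t1)) / 2.
C = 0.2998  # speed of light [m/ns]

def hit_position_m(t1_ns, t2_ns, fiber_length_m, n_eff):
    """Hit position [m] from photon arrival times at the two fiber ends."""
    v = C / n_eff  # effective light speed along the fiber
    return 0.5 * (fiber_length_m - v * (t2_ns - t1_ns))

n_eff = 1.127 * 1.59  # abstract's scale factor times a typical WLSF core index
x = hit_position_m(t1_ns=4.1, t2_ns=6.3, fiber_length_m=2.0, n_eff=n_eff)
```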
Permanent Magnets and Magnetic Field Reconstruction for Stability in Qutrit Analysis | Kathleen Nicole Dunn Tamura
Abstract: | In our lab we create neutral-atom qutrits from rubidium atoms, where different energy levels represent different quantum computing states. Due to Zeeman splitting, the separation of the energy levels depends on the applied magnetic field. There are two routes to stability in a magnetic field: decreasing fluctuations in the field, and monitoring them. To decrease fluctuations, I worked to implement permanent magnets that would produce magnetic fields of approximately 2.5 Gauss. The magnet holders I designed are currently being implemented, along with a Helmholtz coil to cancel the magnets’ field when necessary. To monitor fluctuations, I worked on a magnetic field reconstruction using multiple magnetometers and a Taylor expansion of the magnetic field. Testing the reconstruction suggests that, to approximate the field accurately, the field must be a force-free linear field. Since we will have multiple coils within the reconstruction volume, we do not meet this criterion. However, we are currently testing a possible combination of the reconstruction with a simulation of the coils in order to approximate the field.
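A hedged sketch of the reconstruction idea described above, assuming a first-order (force-free linear) field model B(r) ≈ B0 + G r with G symmetric and traceless, and synthetic magnetometer data:

```python
import numpy as np

positions = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                      [0.0, 0.0, 0.1], [0.1, 0.1, 0.1]])  # magnetometer sites [m]
true_B0 = np.array([2.5, 0.0, 0.0])                       # ~2.5 G bias field
true_G = np.array([[0.10, 0.02, 0.00],
                   [0.02, -0.05, 0.01],
                   [0.00, 0.01, -0.05]])                  # symmetric, traceless [G/m]
readings = true_B0 + positions @ true_G                   # simulated sensor data

def reconstruct(positions, readings):
    """Least-squares fit of B0 (3 dof) and a symmetric traceless G (5 dof)."""
    # Parameters p = [B0x, B0y, B0z, Gxx, Gyy, Gxy, Gxz, Gyz]; Gzz = -(Gxx + Gyy).
    rows, rhs = [], []
    for (x, y, z), b in zip(positions, readings):
        rows += [[1, 0, 0, x, 0, y, z, 0],    # Bx equation
                 [0, 1, 0, 0, y, x, 0, z],    # By equation
                 [0, 0, 1, -z, -z, 0, x, y]]  # Bz equation (uses Gzz = -Gxx-Gyy)
        rhs.extend(b)
    p, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs), rcond=None)
    return p[:3], p[3:]  # field at the origin, gradient components

B0_fit, G_fit = reconstruct(positions, readings)
```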
Structural Geology Meets Deep Learning | Simon Ghyselincks
Abstract: | Structural geology examines the form, arrangement, and distribution of rocks and strata within the Earth’s crust, playing a crucial role in subsurface modeling and mineral exploration. However, creating accurate subsurface models is a complex, ill-posed problem, hindered by the scarcity of ground truth field data and the uncertainties associated with interpreting geological processes that span vast temporal and spatial scales. Traditional methods for generating 3D geological models are labor-intensive, typically resulting in a single, static representation that fails to capture the full range of potential geological structures.
This work presents a new approach to automating the generation of plausible 3D structural geology models using a geophysical computation engine coupled with random variables. The synthetic data generated by the engine is then used to train a flow-based generative AI model via stochastic interpolation. New 3D geological structures are sampled using a numerical ODE solver operating in a high-dimensional velocity field. This velocity field is learned as part of an optimization problem and parameterized with a deep neural network. The resulting models have broad applications, including the production of training data for geophysical modeling, such as geomagnetic and gravity inversions, and filling in missing data within sparsely sampled datasets. A key challenge in this methodology is generating a sufficiently diverse starting dataset that accurately reflects the heterogeneity and complexity of natural geological formations. To address this, we simulate the formation, deformation, erosion, and intrusion of stratigraphic layers over time using a Markov chain process. This comprehensive approach offers a scalable solution for generating complex geological models that can be integrated into a variety of geophysical applications.
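A minimal sketch of the sampling step described above: pushing noise through a velocity field with a simple Euler ODE solver (the placeholder velocity function stands in for the trained network; dimensions and step counts are assumptions):

```python
import numpy as np

def velocity(x, t):
    """Stand-in for the learned neural velocity field v_theta(x, t)."""
    return -x * (1.0 - t)  # placeholder dynamics, not a trained network

def sample(n_voxels, n_steps=100, seed=0):
    """Push a Gaussian latent sample through the flow from t=0 to t=1."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_voxels)
    dt = 1.0 / n_steps
    for k in range(n_steps):
        x = x + dt * velocity(x, k * dt)  # forward Euler step
    return x  # synthetic "geology model" voxels

model = sample(n_voxels=64**3)
```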
Optimising Cryogenic Vibration Isolation for a Superfluid Dark Matter Detector | Ashwati Sanjay
Abstract: | Dark matter has not yet been detected directly, which leads some to turn to alternative theories and experiments. Ultralight dark matter sits on the lighter side of DM theory, comprising bosonic particles at very low masses that behave as a classical wave detectable through the oscillating motion of baryonic matter [1,2]. The Helium ultraLIght dark matter Optomechanical Sensor (HeLIOS) uses the high-Q acoustic modes of superfluid helium-4 to resonantly amplify this signal [3]. Vibration isolation from the environment is critical for this experiment to potentially detect dark matter. We use a mass-spring system functioning as a low-pass filter, together with eddy current dampers, to achieve the required noise reduction. We show that it is possible to integrate a thermal connection and vibration isolation in our mass-spring system, as well as low-temperature damping mechanisms, in a cryogenic environment.
[1] Phys. Rev. Lett. 116, 031102 (2016). [2] Phys. Rev. A 97, 042506 (2018). [3] Phys. Rev. D 109, 095011 (2024).
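For intuition, the standard single-stage isolator result (generic, not HeLIOS-specific): an undamped mass-spring stage with resonance frequency ω0 transmits ground motion as

```latex
|T(\omega)| = \frac{\omega_{0}^{2}}{\left|\,\omega_{0}^{2} - \omega^{2}\,\right|} \;\approx\; \left(\frac{\omega_{0}}{\omega}\right)^{2} \quad (\omega \gg \omega_{0})
```

so stacked stages multiply their attenuations, while damping (here via eddy currents) suppresses the resonance peak at the cost of some high-frequency rolloff.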
Quantum Fluid Meets Nanotechnology: Behaviour of Superfluid Helium-3 in Confined Geometries | Leyla Saraj
Abstract: | Superfluidity is a low-temperature phenomenon in which liquids exhibit near-zero friction and the ability to climb up container walls. Particular interest is taken in the helium-3 isotope, where superfluidity occurs below a few millikelvin in several distinct forms called phases. In recent groundbreaking experiments using nanoscale devices, exotic phases of superfluid helium-3 have been observed. Which phase is thermodynamically stable depends on the device geometry and can be determined by minimizing the associated free-energy cost function. The free energy has been computed for special cases, but there is no general solver. We are developing a software package to assist in this task. The program will allow an arbitrary geometry to be specified and will use a finite element method solver to minimize the free energy and determine the phase of helium-3. Example geometries and their resulting phases will be presented.
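A toy 1D analogue of the free-energy minimization described above: a single-component Ginzburg-Landau functional minimized by gradient descent with pair-breaking walls (the real solver is finite-element and treats the full multi-component helium-3 order parameter; all parameters here are illustrative):

```python
import numpy as np

# Minimize F[psi] = integral( xi^2 (dpsi/dx)^2 - psi^2 + psi^4/2 ) dx
# with psi pinned to zero at the walls (diffuse pair-breaking boundary).
n, xi, dx = 200, 1.0, 0.1
psi = np.full(n, 0.5)
psi[0] = psi[-1] = 0.0  # walls

for _ in range(5000):
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    grad = -2 * xi**2 * lap - 2 * psi + 2 * psi**3  # functional derivative dF/dpsi
    psi[1:-1] -= 1e-3 * grad[1:-1]                  # gradient-descent step
# psi now approximates the order-parameter profile across the slab
```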
Exploratory Investigation of Microplastic Detection in Municipal Organic Waste | Ellitot Andew & Emma Trotta
Abstract: | The Kamloops Residential Organics program is a new program in Kamloops introduced to help reduce and reuse food waste throughout the city. Compost and organic waste are valuable resources that can help us achieve several sustainability objectives. Many municipalities are recognizing these benefits and implementing organic waste programs, but a major hurdle to the implementation and use of municipal compost is contamination. Plastics are among the most common and widespread contaminants in municipal compost, and to manage them and develop solutions, we need accessible and accurate ways to identify and quantify the plastics currently found in organic waste. Several methods have been used to detect microplastics in organic waste, and most fall under three main categories: microscopy, spectroscopy, and thermal degradation.
Our poster focuses on two main objectives. First, we will explore the use of microbes already present in the Kamloops municipal food waste to help break down and eliminate microplastic contamination. Some bacteria produce enzymes capable of altering the carbon chains of microplastics, potentially using them as an energy source. Second, we will investigate various analytical detection techniques to improve the accuracy of identifying and quantifying microplastics in organic waste. We aim to evaluate the effectiveness of bacterial degradation as compost matures and to integrate this with advanced detection methods to enhance overall compost quality and address contamination issues. This project is a collaboration between students and faculty from the Physics, Biological, and Natural Resource Sciences, focusing on using biophysics to enhance microplastic detection and degradation methods.
Improved Diagnostics for TRINAT Spin Polarization | Izzy Kim
Abstract: | TRINAT, TRIUMF’s Neutral Atom Trap, uses a magneto-optical trap to observe beta decay in well-polarized atoms. The quality of the polarization can be determined from the fluorescence of the atoms in the trap during the optical pumping period. A fast CMOS camera is used to image the trap during optical pumping. During my term, I characterized timing specifications and improved diagnostics for the camera. It was then used to perform spectroscopy on the 4S1/2 to 4P1/2 transition of 41K and to calculate the quality of linear and circular polarizations. Good spectroscopic precision was achieved by locking to the frequency of one hyperfine transition; this would change the hyperfine pumping rates as the other transition was scanned. Accuracy was limited by several potential systematics. Polarization was achieved by fixing the laser at one hyperfine transition and using a fiber-coupled EOM to provide the other. However, the quality was limited by the EOM insertion loss, as the optical pumping rate became too low to fully polarize the atoms on the necessary timescale. Changes to the laser scheme informed by these results will aid TRINAT’s upcoming 37K experiment, allowing us to optimize the optical pumping scheme and to observe the effects of hyperfine pumping and the AC Stark shift within the atoms.
Towards a Transfer Learning Strategy for Ultrasound-Based Automatic Segmentation of Organs-at-Risk in Gynecologic Brachytherapy | Marusia Shevchuk
Abstract: | Gynecologic cancers are among the most common cancers in women worldwide. High-dose-rate brachytherapy is an effective component of treatment for many of these cancers and often involves the insertion of an applicator through the vagina to deliver radiation directly to cancerous cells. A crucial aspect of planning brachytherapy treatments is accurately outlining organs-at-risk (OARs), such as the bladder and rectum, to minimize radiation exposure to these healthy tissues. This process can benefit from the use of ultrasound (US), owing to its cost-effectiveness and accessibility. However, manually segmenting OARs is time-consuming and labour-intensive, which can increase patient risk. There is therefore a need for an automatic segmentation tool to streamline the OAR segmentation process. The objective of this study was to develop a pipeline to automatically segment the rectum, a key OAR, in three-dimensional US images from prostate brachytherapy, and to apply this method to gynecologic images. This transfer learning approach was chosen because of the availability of data and is supported by the similarity in rectal appearance across the images. The study employed the no new U-Net (nnU-Net), a robust, open-source segmentation algorithm, which generates several U-Net configurations and selects the best one to produce test predictions; these are then compared to clinician-defined ground-truth segmentations. The resulting Dice Similarity Coefficient was 0.67 +/- 0.066, reflecting good overlap between the ground-truth and algorithmic segmentations. The accuracy was 0.97 and the precision was 0.81, indicating that the algorithm correctly identified most true positives with minimal over-prediction. The use of an automatic segmentation tool in both gynecologic and prostate brachytherapy could improve efficiency and reduce patient risk. Moving forward, this study will focus on improving the results by utilizing an enhanced GPU, applying the method to a gynecologic dataset, and expanding its application to other OARs.
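For reference, the overlap metric quoted above in a minimal hedged form (placeholder masks, not the study's data):

```python
import numpy as np

def dice(pred, truth):
    """Dice Similarity Coefficient, 2|A ∩ B| / (|A| + |B|), for boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

rng = np.random.default_rng(2)
truth = rng.random((64, 64, 64)) > 0.7           # stand-in ground-truth mask
pred = truth ^ (rng.random(truth.shape) > 0.95)  # noisy "prediction"
print(f"DSC = {dice(pred, truth):.2f}")
```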
Calo4pQVAE: A Quantum-Assisted 4-Partite VAE Surrogate for Particle-Calorimeter Interactions | Ian Lu
Abstract: | With the High Luminosity Large Hadron Collider (HL-LHC) era set to begin particle collisions by the end of this decade, it is evident that the computational demands of traditional collision simulation methods are becoming increasingly unsustainable. Existing methods, which rely heavily on first-principles Monte Carlo simulations for modeling event showers in calorimeters, are projected to require millions of CPU-years annually, far exceeding current computational capacities. This pressing bottleneck presents an exciting opportunity for advancing computational physics by integrating deep generative AI with quantum computing technologies. We propose a quantum-assisted deep generative surrogate founded on a variational autoencoder (VAE) combined with an energy-conditioned Restricted Boltzmann Machine (RBM) embedded in the model’s latent space as a prior. We created a transformation that maps the topology of D-Wave’s Zephyr quantum annealer onto the nodes and couplings of a 4-partite RBM. After training, we load our model’s classically learned representations onto the Zephyr annealer to significantly accelerate shower generation. Furthermore, we implemented a mirrored hierarchical architecture with 3D convolutions in our VAE. By feeding different levels of encoded input representations into the partitions of our RBM and establishing skip connections between the submodules of the VAE’s encoder and decoder, our model achieves high accuracy in the reconstruction of event showers. By integrating classical and quantum computing, this hybrid framework paves the way for the use of large-scale quantum simulations as priors in deep generative models. Through this approach, we demonstrate the speed and performance of our quantum-classical hybrid models, achieving significantly faster sampling times while maintaining high quality in the generated particle-calorimeter event showers.
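A hedged sketch of the RBM building block named above, reduced to the generic 2-partite case (the actual model is 4-partite, energy-conditioned, and mapped onto the Zephyr topology; sizes and weights here are placeholders):

```python
import numpy as np

# RBM energy E(v, h) = -a.v - b.h - v.W.h, sampled by alternating block-Gibbs updates.
rng = np.random.default_rng(3)
n_v, n_h = 32, 16
W = rng.normal(0, 0.1, (n_v, n_h))
a, b = np.zeros(n_v), np.zeros(n_h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One alternating Gibbs update: v -> h -> v'."""
    h = (rng.random(n_h) < sigmoid(b + v @ W)).astype(float)
    return (rng.random(n_v) < sigmoid(a + W @ h)).astype(float)

v = (rng.random(n_v) < 0.5).astype(float)
for _ in range(100):  # burn-in toward samples from the RBM prior
    v = gibbs_step(v)
```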
“Is it a bird? Is it a plane? No, it’s (probably) a baloney sandwich!”: Identifying Space Trash in the Night Sky | Abdullah Bajaber & Emily Lau
Abstract: | Space debris, the remnants of defunct satellites, spent rocket stages, and other discarded materials, poses a growing threat to our orbital environment. Since the 1950s, government programs have promoted spaceflight capabilities as well as the installation of artificial satellites in Earth orbit. Yet, as humanity pushes forward in its quest for the stars, the number of orbital launches has skyrocketed over the last decade, reaching a peak of 224 launches in 2023 alone. This increase has led to a more crowded orbit, heightening the risk of collision events that threaten crucial orbital activities such as radio telecommunications, weather forecasting, and deep space observation.
Adequate tracking and monitoring of space debris populations is essential for preventive measures such as the orbital adjustment of satellites and early warning systems for manned spacecraft, such as the ISS. However, traditional tracking and observation techniques have struggled to map the majority of this debris, partly due to its overwhelming abundance in terrestrial orbit and the wide variation in sizes and dimensions that makes certain debris elusive to detection. In this work, we present an alternative approach for detecting large debris fragments using the Fujifilm X-T3, a consumer mirrorless digital camera popular with professionals and enthusiasts alike. The goal is to optimize tracking resources by incorporating commercial technology, encouraging amateur astronomers and the general public to contribute to debris monitoring networks while allowing observatories to concentrate on less conspicuous debris. We investigate the feasibility and potential of monitoring space debris through repeated imaging of the night sky, active cross-referencing with known satellite flybys, and photometric analysis. Our results demonstrate the viability of debris detection with modest equipment and highlight the potential of citizen science in advancing current tracking methods.
Comparing Model Representations of Diffusive Mixing in the Arctic Ocean | Joshua Lerner
Abstract: | The Arctic Ocean is a unique environment that is changing faster than anywhere else on Earth, with important implications for the climate system. Climate models exhibit systematic biases in their representation of these changes, failing, for example, to accurately reproduce the observed sea ice evolution (Notz et al. 2020), upper Arctic Ocean properties (Muilwijk et al. 2023), and water properties of deeper layers (Heuze et al. 2023). It is important to understand how the representations of different processes in these models compare, not just with each other but also with observational data, in order to build more robust and reliable climate models in the future. Here, we examine how three climate models represent the mixing of temperature and salinity in the Arctic Ocean: the Arctic Subpolar Gyre State Estimate (ASTE), the MIT General Circulation Model (MITgcm), and the Arctic and Northern Hemisphere Atlantic (ANHA) configuration of the Nucleus for European Modelling of the Ocean (NEMO). We characterize mixing by analyzing the vertical diffusivity, gradients, and diffusive fluxes of temperature and salinity output by the models in key regions throughout the Arctic Ocean. This characterization suggests that the ANHA configuration of NEMO prescribes high mixing rates in the mixed layer, resulting in an overly smooth vertical temperature and salinity structure. Furthermore, by comparing these diagnostics seasonally, we identify a potential bias in this configuration toward overestimating sea ice formation. Additionally, applying an observationally informed background mixing rate to the MITgcm appears to partially replicate the effects observed in the climate reanalysis used in ASTE. The findings point to possible discrepancies between models in their representations of the Arctic Ocean’s physical properties, suggesting the need for a greater effort to understand how climate models represent mixing in these rapidly evolving regions.
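A minimal sketch of the flux diagnostic described above, F = -K dT/dz, with synthetic profiles standing in for model output (all values are placeholders):

```python
import numpy as np

z = np.linspace(0, 500, 101)          # depth [m]
T = 2.0 * np.exp(-z / 150.0) - 1.0    # toy temperature profile [deg C]
K = np.where(z < 50, 1e-3, 1e-5)      # mixed-layer vs interior diffusivity [m^2/s]

dTdz = np.gradient(T, z)              # vertical temperature gradient [deg C/m]
flux = -K * dTdz                      # downgradient diffusive flux [deg C m/s]
```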
Automated Beam Tuning Methods at TRIUMF | Alexander Katrusiak
Abstract: | The TRIUMF automated beam tuning program has developed an end-to-end tuning procedure. Model Coupled Accelerator Tuning (MCAT) uses a linear envelope code, TRANSOPTR, to compute linear optics sequentially and optimize settings to produce specified beam characteristics (waists, chromaticity, dispersion, and so on). This program allows for live online feedback of beam properties along transport and acceleration. Corrective steering is further optimized using the Bayesian Optimization for Ion Steering (BOIS) method, which maximizes Faraday cup current on a given beamline section after MCAT sets all other optical elements. BOIS uses a Gaussian process as a surrogate model of the objective problem, which allows for rapid training from exclusively online data, giving users low overhead when first deploying the method on their system. MCAT has been used extensively to tune, design, and diagnose beamlines across TRIUMF-ISAC, and is now part of standard operational procedure. BOIS has been used to tune ISOL beams from ISAC targets, as well as stable beams from ISAC’s OffLine Ion Source (OLIS), through both low- and high-energy beamlines. Recent developments include Multi-Objective Bayesian Optimization (MOBO) and applications to experiment-specific beam tuning procedures at DRAGON.
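A hedged sketch of the BOIS idea: a Gaussian-process surrogate choosing the next probe of a (here simulated) Faraday cup current. The upper-confidence-bound acquisition rule and all settings are illustrative assumptions, not the production method:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def faraday_cup_current(setting):
    """Stand-in objective: transmitted current peaks at an unknown setting."""
    return float(np.exp(-((setting - 0.3) ** 2) / 0.02))

rng = np.random.default_rng(4)
X = list(rng.uniform(-1.0, 1.0, 3))  # initial random corrector settings
y = [faraday_cup_current(x) for x in X]
grid = np.linspace(-1.0, 1.0, 400)

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
    gp.fit(np.array(X).reshape(-1, 1), y)
    mu, sd = gp.predict(grid.reshape(-1, 1), return_std=True)
    x_next = float(grid[np.argmax(mu + 2.0 * sd)])  # upper-confidence-bound pick
    X.append(x_next)
    y.append(faraday_cup_current(x_next))

best_setting = X[int(np.argmax(y))]
```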
Testing a New Method of Constraining Standard Model Effective Field Theory with Physics-Based Machine Learning | Kye Emond
Abstract: | Standard Model Effective Field Theory (SMEFT) is a formalism that introduces parameters known as Wilson Coefficients to model new physics. We evaluate a new method to constrain the possible values of two such Wilson Coefficients that affect the production and decay of Higgs bosons in the ATLAS experiment at the Large Hadron Collider. Previous work determined constraints for these coefficients using likelihoods constructed by binning events and comparing the binned counts to theory predictions. Our method promises to produce stronger constraints by constructing a likelihood that processes each event individually, rather than the binned totals. An important part of this construction is analytically intractable, so we train deep neural networks to approximate it. Although we find evidence that this method could indeed produce stronger constraints on proposed theories, we also find that standard feedforward neural networks fail to train to the required accuracy with a variety of architectures.
Probing Electron Self-Energy in the Kagome Superconductor CsV3Sb5 Using Angle-Resolved Photoemission Spectroscopy (ARPES) | Parvin Aliyeva
Abstract: | In this study, we focus on the analysis of the electron self-energy in the kagome superconductor CsV3Sb5 using angle-resolved photoemission spectroscopy (ARPES). ARPES is a technique that allows us to deduce the electronic structure of a material by measuring the kinetic energy of photoelectrons emitted in all directions after the sample is exposed to monochromatic light. From the kinetic energy of the photoelectrons in vacuum, we extract the energies and momenta of electrons in the material based on conservation laws. CsV3Sb5 is particularly interesting due to its electronic properties arising from the underlying kagome lattice, as well as the coexistence of a charge density wave (CDW) and superconductivity, both with putatively unconventional characters. To understand what drives these phases, it is critical to analyze the interactions between electrons within the material. The electron self-energy is important in this sense, as it captures the renormalized energy experienced by the electrons, as opposed to the band energies predicted by non-interacting models. By analyzing the self-energy, we can gain insight into various many-body interactions, such as electron-phonon coupling, which are fundamental to the material’s properties.
After correcting the measured energies and converting from angle to momentum, we leverage the sensitivity of ARPES to the single-electron-removal spectral function to access this self-energy in an energy- and band-resolved way. Through this, we find that the electron-phonon coupling in CsV3Sb5 is particularly strong in the kagome lattice-derived bands most affected by the transition to the CDW state, supporting the important role of band-dependent electron-phonon coupling in this system.
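For reference, a standard momentum-distribution-curve (MDC) route to the self-energy (the authors' exact procedure may differ): with bare band velocity v_b, Fermi momentum k_F, MDC peak position k_m(ω), and Lorentzian half-width Δk(ω),

```latex
\mathrm{Re}\,\Sigma(\omega) = \omega - v_{b}\,\bigl[k_{m}(\omega) - k_{F}\bigr], \qquad \mathrm{Im}\,\Sigma(\omega) = -\,v_{b}\,\Delta k(\omega).
```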
Investigating Time-to-Digital Converters for General Fusion’s Time-of-Flight Neutron Spectrometer | Zeest Fatima
Abstract: | General Fusion’s LM-26, a next-generation Magnetized Target Fusion (MTF) experiment, is designed to demonstrate the repeated compression of magnetized plasmas to achieve fusion conditions. A crucial diagnostic tool for this experiment is the multilayer coaxial time-of-flight (MCTOF) neutron spectrometer, developed at TRIUMF. This spectrometer measures the plasma ion temperature during peak compression by analyzing the energy of neutrons released from the deuterium-deuterium fusion reaction.
The spectrometer uses two layers of silicon photomultipliers (SiPMs) paired with scintillators to detect neutron collisions. By recording the time between neutron impacts on each layer, with time-to-digital converters (TDCs) used for data acquisition, the energy of the neutrons can be accurately calculated, providing insight into the plasma’s temperature. The spread in neutron energy due to thermal Doppler broadening is used to infer the plasma ion temperature, enabling a real-time evaluation of fusion performance. This investigation focuses on the use of TDCs in this setup, addressing challenges such as short SiPM pulse widths, dark noise counts, and single-photon timing resolution. The work aims to refine the measurement techniques to achieve the high temporal resolution critical for the success of the LM-26 machine in reaching scientific breakeven by 2026.
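A minimal illustration of non-relativistic time-of-flight energy reconstruction (the flight path and timing are placeholders, not the MCTOF geometry):

```python
M_N = 939.565e6  # neutron rest mass energy [eV]
C = 2.998e8      # speed of light [m/s]

def neutron_energy_ev(flight_path_m, tof_s):
    """Kinetic energy [eV] from flight path and time of flight, E = m v^2 / 2."""
    v = flight_path_m / tof_s
    return 0.5 * M_N * (v / C) ** 2

# A 2.45 MeV DD neutron moves at ~2.16e7 m/s, so 0.5 m takes ~23 ns.
E = neutron_energy_ev(0.5, 23.1e-9)
```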
Feasibility of Angular Multiplexed Light Storage in a Cold Rb-87 Ensemble | Douglas Florizone
Abstract: | Quantum memories play an essential role in quantum communication and quantum information processing. One important performance parameter of a quantum memory is the capacity to selectively store and retrieve independent photonic modes with different degrees of freedom. We sought to demonstrate this multimode capacity in a cold Rb-87 ensemble, specifically through angular multiplexed wave vectors. Using three distinct angles to initiate storage and retrieval of light pulses in the ensemble, we assessed whether we could achieve the angular selectivity necessary for future quantum memory applications. While increasing the angular separation limits the coherent interaction, we find that even a large angle produces a small coherent readout, preventing complete angular selectivity. Furthermore, we find that attempting optical readout at different angles erases the spin-wave coherence as a function of power, preventing us from realizing an angular multiplexed quantum memory in our setup.
Measuring the System Temperature of the BVEX Radio Telescope | Peter Simpson
Abstract: | Determining the system temperature of the BVEX telescope is necessary to distinguish a signal from noise when collecting data. The system temperature is the sum of all noise attributed to the telescope receiver, which can come from the system itself or from external radio frequency interference sources. A blackbody calibration source was constructed by heating a microwave absorber to a set temperature using a series of resistors. This blackbody has a brightness temperature equal to its thermal temperature. Using a PID loop, the temperature of the calibrator could be held within 0.25 K of the set point. By recording the integrated power measured by the receiver at two different temperatures over two 25-minute periods, the Y-factor could be determined from the ratio of the powers at the high and low blackbody temperatures. From the Y-factor, the system temperature was determined, but found to be statistically insignificant. To improve the accuracy of the T-sys measurement, a power spectral density was taken, and based on the corner frequency the hot and cold power measurements were each split into 150 ten-second segments to minimize 1/f noise. Pairing each hot segment with each cold segment produced 22,500 Y-factors in a symmetric multimodal distribution between 1.00 and 1.07, with the highest peak at 1.032. However, the resulting system temperatures were statistically insignificant, distributed uniformly between 200 and 1000 Kelvin.
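A hedged sketch of the segment-pairing Y-factor analysis described above, with synthetic power segments and assumed load temperatures (none of the numbers are BVEX data):

```python
import numpy as np

def t_sys(p_hot, p_cold, t_hot_k, t_cold_k):
    """Receiver system temperature [K]: Tsys = (Thot - Y*Tcold) / (Y - 1)."""
    y = p_hot / p_cold
    return (t_hot_k - y * t_cold_k) / (y - 1.0)

rng = np.random.default_rng(5)
hot = rng.normal(1.05, 0.005, 150)   # stand-in 10 s hot-load power segments
cold = rng.normal(1.00, 0.005, 150)  # stand-in 10 s cold-load power segments

# All hot/cold pairings, as in the 22,500-combination analysis above.
tsys = t_sys(hot[:, None], cold[None, :], t_hot_k=360.0, t_cold_k=300.0)
```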
Very Long Baseline Interferometry Timing for Localizing Fast Radio Bursts | Alyssa Atkinson
Abstract: | Very Long Baseline Interferometry (VLBI) is a tool used in radio astronomy to obtain high-resolution imaging and precise localizations of sources. It relies on interferometry: coherently adding signals from radio telescopes separated by a long distance (the longer the baseline, the better the spatial resolution). Due to the geometric nature of the problem, the localization precision relies on accurate synchronization of the clocks at each VLBI station. For this reason, very stable atomic clocks are used, and testing their timing precision is crucial for the success of VLBI experiments. The baseline of interest in this work is ARO (Algonquin Radio Observatory) – CHIME (Canadian Hydrogen Intensity Mapping Experiment), which currently runs an experiment aimed at localizing Fast Radio Bursts (FRBs) with sub-arcsecond precision. At ARO, the atomic clock was recently replaced, requiring validation of its timing. In this project we used radio pulsars with known locations to perform VLBI and track the residual time delays over time, which tells us how precisely we can make localizations. This project is a stepping stone in the effort to localize FRBs.
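For context, the textbook geometric delay that clock errors corrupt (not the pipeline's full delay model): for baseline vector b and unit vector ŝ toward the source,

```latex
\tau_{g} = \frac{\vec{b}\cdot\hat{s}}{c},
```

so any unmodelled clock offset adds directly to the measured delay and biases the inferred source position.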
Effects of Irradiation-Induced Disorder on High-Tc Superconductors | Sanjit Patil
Abstract: | A charge density wave (CDW) is a phase of metals in which the electrons form a standing wave, with spatial oscillations of the electron density. Breaks in the symmetry of the lattice often accompany these standing waves. CDWs are a characteristic phase in almost all cuprates, especially when hole-doped at 12.5%, where the superconducting critical temperature (Tc) is suppressed and CDWs are most pronounced. Evidence suggests that CDWs compete with the superconducting phase in many cuprates. This experiment used gamma radiation to induce disorder in the crystal lattice of La2-xSrxCuO4 (LSCO) and La1.6-xNd0.4SrxCuO4 (Nd-LSCO) samples at dopings where Tc is at its highest and lowest. The samples were irradiated up to 1006 Gy using a Cs-137 source. The Tc of irradiated and unirradiated samples will be measured and compared to determine the effect of disorder in the crystal lattice.
Tagging Emerging Jets with a Graph Neural Network | Randon Hall
Abstract: | The Standard Model (SM) of particle physics has been remarkably successful at predicting experimental results at the Large Hadron Collider (LHC), but fails to explain the observed dark matter density of the Universe. Many Beyond the Standard Model (BSM) theories predict that dark matter may be composed of particles in a dark sector with self-interactions similar to the strong force in the SM. A potential signature known as an emerging jet (EJ), a topologically unique jet containing many displaced tracks, would be a strong indication of the existence of this dark sector. One effective approach to identifying these potential EJ signatures is a machine learning algorithm known as a graph neural network (GNN). GNNs are a type of neural network that specializes in graph-type data, which jet data can be conveniently structured as. The GNN was trained on simulated collision events mimicking those at ATLAS to learn to recognize the EJ signature. One crucial step when developing a new tagger is to estimate the number of false positives, using methods known as background estimation. In this poster, we present preliminary results on a novel background estimation method applied to the GNN. The results demonstrate the method’s ability to accurately predict the level of background present in the signal region.
Feature Extraction of Helical-Shaped Magnetotactic Bacteria Using Convolutional Neural Networks | Sujit Patil
Abstract: | Understanding the properties of bacterial motion is crucial for microbiological research, and an interesting low-Reynolds-number hydrodynamics problem. Magnetotactic bacteria (MTB) are a group of bacteria that synthesize magnetite crystals and can navigate along an external magnetic field. They are of interest for many technical applications, such as nanobots and drug delivery systems. Our lab aims to study the motion of a well-studied species of helical-shaped MTB, Magnetospirillum magneticum AMB-1. In this study, we compare a convolutional neural network (CNN) with the traditional extraction method of frame-by-frame image analysis for extracting key parameters of the MTB cell from light microscopy images: helix radius, pitch and thickness, cell length, direction, and rotation. Preliminary results show the CNN’s ability to discern some features with great precision and at greater speed than traditional image analysis. This will allow us to establish the precise relationship between rotational and translational velocity for cells of different lengths.
Monte-Carlo Event Misreconstruction in SNO+ | Cameron Bass
Abstract: | SNO+ is a liquid scintillator-based neutrino experiment consisting of a large spherical acrylic vessel (AV) surrounded by photo-multiplier tubes (PMTs). Above the AV, a cylindrical neck extends upwards. The position of events in the SNO+ detector is reconstructed based on the arrival times of photons detected by the array of PMTs. This analysis investigates the properties of Monte-Carlo (MC) high-energy events originating from the neck which are mis-reconstructed within the AV. Event mis-reconstruction introduces an additional source of background that can interfere with the analysis. By understanding the properties of these events, we can apply necessary position and energy cuts to reduce this background. To investigate this, Monte-Carlo simulations were conducted to study particle interactions in the neck of the SNO+ detector. The fraction of events wrongly reconstructed as occurring within the spherical part of the AV was determined as a function of the true position of the particle. The optical properties of the acrylic that makes up the neck, which are not as well known as those of the acrylic that makes up the AV sphere, were also varied in the simulations, showing their impact on event mis-reconstruction. Events with a true position outside of the PMT array were found to be more likely to be mis-reconstructed within the acrylic sphere, and the scattering length in the neck acrylic was found to have a significant impact on the number of events thus mis-reconstructed.
Fibre Polarization Compensation and Control for Confocal Microscopy and Beyond | Hamish Johnson
Abstract: | The silicon T centre contains long-lived electronic and nuclear spin qubits and emits photons in the telecom O-band, suitable for integration with existing communication networks. Its ability to integrate quantum bits with photonic systems in scalable silicon technology presents an opportunity for advancing both quantum computation and long-distance satellite and fibre communication. To better understand the optical properties of this defect, we investigate the polarization dependence of its optical transitions. In this work, I present the design and implementation of a system that provides full control over the polarization state of a laser, compensating for optical components and enabling polarization-dependent measurements. The T centre’s excited state, a bound exciton with a spin-3/2 hole, consists of mixed-spin eigenstates. By probing these states with polarized light, we aim to resolve the dipole orientation of the optical transitions. I describe the construction of a free-space polarization control setup used to study single T centres through photoluminescence excitation in confocal microscopy. This system reveals a polarization-dependent response, offering insight into the T centre’s dipole structure. Our ongoing work in bulk silicon allows a complete characterization of the three-dimensional dipole. The optical dipole orientation is an important parameter for optimizing the operation of T centre qubits in quantum processors and networks.
Development of a Surface-Enhanced Raman Spectroscopy Sensor for the Detection of Trace Concentrations of Small Molecules | Ann Drakes
Abstract: | Detection of trace concentrations of small molecules in various media is necessary for many applications, creating a need for sensitive and reliable detection methods. Surface-enhanced Raman spectroscopy (SERS) offers a promising solution due to its ability to provide fingerprint-like spectra of molecules, enabling precise identification even at trace concentrations. In this study, we present the fabrication and testing of a SERS sensor tailored for the detection of small molecules such as benzoyl peroxide, gibberellic acid, and salicylic acid.
Thin-film gold nanostructures were fabricated using the pulsed laser deposition technique. The film-like gold nanoparticle substrates, crafted through a top-down approach, exhibit elevated stability, sensitivity, accuracy, and precision in measurement compared to colloidal solutions. Their controlled composition, thickness, and other properties facilitate uniform interaction between analyte and substrate, ensuring dependable and consistent performance across experiments. The sensor’s performance was evaluated by testing different concentrations of small molecules; the SERS sensor exhibited high sensitivity and specificity, allowing for the detection of small molecules at trace levels. Furthermore, the efficacy of the SERS sensor was validated through testing on real samples: complex small-molecule matrices were analyzed, and the sensor successfully detected and identified the molecules present, demonstrating its practical utility in various applications. Overall, our results highlight the potential of SERS-based sensors as powerful tools for the rapid and reliable detection of small molecules, helping ensure the integrity of the media in question.
Counting Massive Galaxies in the Early Universe with JWST | Michelle Denny
Abstract: | Massive galaxies are a very useful instrument for studying galaxy evolution, especially in the early universe. Due to observational and modeling challenges, distant galaxies that are more massive than the Milky Way are difficult to quantify. In particular, lower-mass galaxies with certain observational features can masquerade as massive galaxies in large photometric catalogs. This misclassification, coupled with limitations such as insufficient photometric sampling and modeling systematics, impacts the inferred number density of galaxies at the high-mass, high-redshift end of the distribution. In this project, JWST galaxy catalogs and statistical methods, e.g., simulation-based inference (SBI), were used to estimate the misclassification rate as a function of time in massive galaxy samples. The initial catalog contained a wide range of ground-based telescope data across multiple wavelengths, while JWST provided infrared data otherwise inaccessible from Earth, which is invaluable for the study of massive galaxies: several spectral features exist within those bandwidths that help to precisely classify the redshifts of these galaxies. We examined how stellar population parameters (specifically stellar masses and redshifts) are derived from photometry using the photometric redshift-fitting software EAZY, which refined the interpretation of the modeling results. This was supplemented with SBI, a likelihood-free inference method, to predict these stellar population parameters quickly and accurately for catalogs without JWST data. Using these methods, we are able to determine the misclassification rate in our sample by estimating the scatter of the predicted stellar masses and redshifts with respect to the actual parameters. Through this research, better models for galaxy evolution can be developed to more accurately study the physics of the early universe.
Mapping HOD Parameters to Linear Galaxy Bias Using Galaxy-Galaxy Lensing | Akshay Prasad Ramasubramanian
Abstract: | Working under the ΛCDM model, I use a simulation-based model built on AbacusSummit that predicts the excess surface mass density (ESD) around foreground galaxies through galaxy-galaxy lensing (GGL). GGL is part of weak gravitational lensing, which describes subtle changes in the size (convergence) and shape (shear) of distant galaxies due to the deflection of light by foreground lens galaxies. In the simulation-based model, the foreground galaxies are modeled using a Halo Occupation Distribution (HOD), which populates dark matter haloes with central and satellite galaxies based on the haloes’ mass. Moreover, it predicts the stellar masses of galaxies, which can be used to investigate different types of galaxies. The overall idea of my work is to find a mapping between the parameters of the HOD and a simpler linear galaxy bias parameter. To determine the linear galaxy bias, we fit an analytical description of the ESD, using the Python library “pyccl”, to the ESD predicted by the simulation-based model. Since the linear galaxy bias is not accurate on all scales, my first task is to determine the scales over which the analytical model accurately fits the measured data. After defining the appropriate scales, I will fit the analytical model to the measured data to find the best-fitting linear galaxy bias. Lastly, I will derive a mapping between the HOD parameters and the fitted galaxy bias using a neural network emulator. In follow-up work, I may extend the analytical model to higher-order bias terms and validate whether this helps to fit smaller scales.
Quadratically Convergent Self-Consistent Field (QC-SCF) Orbital Optimization | Shuoyang Wang | ||
Abstract: | In quantum chemistry, the electronic structure of molecules is analyzed using quantum mechanics, typically through the solution of the N-electron Schrödinger equation. Because this equation lacks an analytical solution, numerical methods are required for approximation. Various quantum chemistry methods have been developed to improve the accuracy of these approximations for different systems. In this work, we implemented the quadratically convergent self-consistent field (QC-SCF) algorithm to numerically solve the N-electron Schrödinger equation and adapted the results for orbital optimization, leading to more accurate solutions. The QC-SCF algorithm employs an exponential parametrization of the wave function through a local unitary transformation, which allows the energy to be expanded using the Baker–Campbell–Hausdorff (BCH) series truncated at second order. This approach facilitates gradient descent optimization to minimize the energy function and achieve optimized orbitals. During orbital optimization, configuration interaction (CI) wave functions are constructed from the QC-SCF optimized orbitals using PyCI, with initial guesses iteratively refined for further optimization. A key advantage of this method is its use of the second-quantization formalism, a standard in modern quantum theory, along with guaranteed convergence, to tackle the challenging problem of orbital optimization. | ||
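For reference, the second-order truncation of the BCH series that underlies the quadratic energy model takes the standard textbook form (a generic expression consistent with the abstract's description, not code or notation from the project):

```latex
E(\hat{\kappa}) = \langle \Psi | e^{-\hat{\kappa}} \hat{H} e^{\hat{\kappa}} | \Psi \rangle
\approx E_0 + \langle \Psi | [\hat{H}, \hat{\kappa}] | \Psi \rangle
+ \tfrac{1}{2} \langle \Psi | [[\hat{H}, \hat{\kappa}], \hat{\kappa}] | \Psi \rangle ,
\qquad
\hat{\kappa} = \sum_{p > q} \kappa_{pq} \bigl( \hat{a}^{\dagger}_{p} \hat{a}_{q} - \hat{a}^{\dagger}_{q} \hat{a}_{p} \bigr)
```

Minimizing this quadratic model in the anti-Hermitian parameters κ_pq yields the Newton-like orbital-rotation step responsible for the method's quadratic convergence.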
Next Generation Precision Mass Measurements with Radioactive Ions | Daphene Wen | ||
Abstract: | D. Wen, J. Ash, J. D. Cardona, O. Kester, F. Maldonado Millan, R. Simpson, A. A. Kwiatkowski. TRIUMF’s Ion Trap for Atomic and Nuclear Science (TITAN) is best known for its precision mass measurements of short-lived radionuclides. These measurements have many implications for our understanding of nuclear structure, nuclear astrophysics, tests of the Standard Model of Particle Physics, insights into neutrino physics, and more. TITAN’s Electron Beam Ion Trap (EBIT) is a device designed to produce highly charged ions (HCI) to achieve the highest precision for tests of the Standard Model. The EBIT uses a high-density, magnetically compressed electron beam to knock off bound electrons and increase the charge state of trapped radioactive ions. A new electron source has been designed to improve control of the electron beam, decrease fringing effects of the magnetic field to increase beam density, and improve mechanical operations. The electron source is in the process of being commissioned. The status of the new electron gun and its potential for new science opportunities will be discussed. | ||
Experimental Measurement of Time Reversal Symmetry Breaking in 37K Beta Decay | Chaitanya Mandar Luktuke | ||
Abstract: | Sakharov’s conditions for the baryon asymmetry require sources of CP or CPT violation beyond currently accepted experimental observations. We aim to probe one such source of time-reversal symmetry violation using radiative beta decays of 37K, which have a four-particle final state, by studying the spatial asymmetry of the outgoing particles. TRIUMF’s Neutral Atom Trap (TRINAT) is uniquely equipped to study the angular distribution of all decay products from spin-polarized beta-emitting isotopes produced by the Isotope Separator and Accelerator (ISAC) facility. We aim to study the spin 3/2+ to spin 3/2+ transition in the 37K nucleus, which decays partly via the Fermi operator and emits a gamma ray in the process, resulting in a single-vertex decay interaction. Harvey, Hill, and Hill established the theoretical framework modelling this four-point interaction, and Gardner and He built on this framework, defining a spatial decay observable that violates T-symmetry. Further work established a finite observable asymmetry due to Standard Model interactions. Specialised gamma, ion, and beta detectors help us detect and reconstruct the outgoing momenta, from which we extract an asymmetry that we can compare to Standard Model and non-Standard-Model predictions. Similar experiments conducted previously instead employed spin-polarized atoms to probe T-asymmetric terms, and so are sensitive to different microscopic physics. | ||
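Schematically, observables of this class are triple-product correlations among final-state momenta and the nuclear spin, which are odd under motion reversal. The following is a generic illustrative form only; the precise observable defined by Gardner and He involves the specific momenta and polarization accessible in this decay:

```latex
A_T \;\propto\; \bigl\langle \, \hat{\mathbf{J}} \cdot \bigl( \hat{\mathbf{p}}_{\beta} \times \hat{\mathbf{p}}_{\gamma} \bigr) \, \bigr\rangle
```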
Optimizing a Cyclotron’s Chamber Geometry: A Study of Cyclotron Particle Dynamics and Radiation in Weak-Focusing Magnetic Environments | Alhasan Shnoot (and Ana Torres Bejarano) | ||
Abstract: | The performance of a cyclotron particle accelerator relies heavily on the design of its chamber, whose dimensions must be optimized while accounting for safe operation. This study investigates the principles of weak-focusing magnetism and its application in inducing betatron particle focusing within the chamber. We describe requirements for shielding against radioactive emissions, including gamma rays and neutrons, to ensure the safety and efficacy of the accelerator. By examining these facets, we derive functional relationships that inform the geometric parameters of the cyclotron chamber, specifically its height, radius, and thickness. This research contributes to the design of more efficient and secure cyclotron systems, enhancing their operational capabilities while adhering to safety standards.

Engineering a Mini Cyclotron Crew – EMC² is an undergraduate student-run initiative dedicated to constructing a functional cyclotron. The initiative has an educational focus, emphasizing hands-on experience while fostering an environment in which students from diverse backgrounds and disciplines collaborate on problems in physical instrumentation, applying theoretical knowledge in real-world contexts. EMC² aims to inspire the next generation of scientists and engineers. | ||
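For context, the standard weak-focusing (betatron) stability condition invoked above can be written in terms of the magnetic field index (a textbook relation, not a result specific to this study):

```latex
n(r) = -\frac{r}{B_z}\frac{\partial B_z}{\partial r},
\qquad \nu_r = \sqrt{1 - n}, \quad \nu_z = \sqrt{n},
\qquad 0 < n < 1
```

Within this range both betatron tunes are real, so radial and vertical oscillations about the equilibrium orbit remain bounded; this constrains the radial field profile and hence the chamber height and radius.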
Review of the time-dependent Majorana mean-field theory | Sahib Singh | ||
Abstract: | In recent years, several candidate materials have been proposed to realize Kitaev physics. However, the microscopic spin Hamiltonians describing most of these materials include non-Kitaev terms. To correctly capture static and dynamical quantities in the presence of these additional terms, general approaches have been developed. In this poster presentation, we review the time-dependent Majorana mean-field theory, a method put forward to compute dynamical quantities. We detail its application to the computation of time-dependent spin correlators for the Kitaev-Heisenberg model. | ||
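For reference, the Kitaev-Heisenberg model named above is conventionally written as (a standard form; the couplings K and J are model parameters):

```latex
H = \sum_{\langle ij \rangle_{\gamma}} \left[ K \, S_i^{\gamma} S_j^{\gamma} + J \, \mathbf{S}_i \cdot \mathbf{S}_j \right]
```

where γ ∈ {x, y, z} labels the three bond directions of the honeycomb lattice; the Heisenberg exchange J is the simplest non-Kitaev term that such a mean-field treatment must accommodate.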
Varieties of the Schelling Model | Marlyn Mwita | ||
Abstract: | The Schelling Model of Segregation is a foundational framework for understanding how collective behaviors can diverge from individual incentives through interactions between individuals. Its popularity, however, has resulted in numerous variants of the model, creating a fragmented body of literature. To address this, we revisit Schelling’s original model and employ statistical-mechanics tools to analyze 54 rule variants through agent-based modeling. Key parameters were varied, including the search algorithm, movement criteria, and the prospecting time of dissatisfied agents, revealing that the diverse rule variants collapse into just four distinct phase-diagram classes, primarily differentiated by agent rationality. The order of the phase transitions was characterized by examining fluctuations via the model’s convergence time, and the kinetic and dynamic factors driving each part of the phase diagram were investigated. | ||
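As a concrete reference point, one common baseline variant (random search, immediate relocation of any dissatisfied agent) can be sketched as follows; the grid size, vacancy fraction, and tolerance threshold are illustrative choices, not the study's settings.

```python
# Minimal sketch of one baseline Schelling variant (random search,
# immediate relocation). Parameters are illustrative, not the study's.
import numpy as np

rng = np.random.default_rng(0)
L, empty_frac, threshold = 50, 0.10, 0.5

# 0 = empty site; 1 and 2 are the two agent types.
grid = rng.choice([0, 1, 2], size=(L, L),
                  p=[empty_frac, (1 - empty_frac) / 2, (1 - empty_frac) / 2])

def satisfied(g, i, j):
    """Satisfied if >= threshold of occupied Moore neighbors
    (periodic boundaries) share the agent's type."""
    t = g[i, j]
    neigh = [g[(i + di) % L, (j + dj) % L]
             for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    occ = [n for n in neigh if n != 0]
    return not occ or sum(n == t for n in occ) / len(occ) >= threshold

for sweep in range(100):                     # iterate toward convergence
    movers = [(i, j) for i in range(L) for j in range(L)
              if grid[i, j] != 0 and not satisfied(grid, i, j)]
    if not movers:
        break                                # converged: all satisfied
    empties = [tuple(p) for p in np.argwhere(grid == 0)]
    rng.shuffle(movers)
    for i, j in movers:
        k = rng.integers(len(empties))       # random-search movement rule
        ei, ej = empties.pop(k)
        grid[ei, ej], grid[i, j] = grid[i, j], 0
        empties.append((i, j))
```

The 54 variants analyzed in the study differ precisely in the rules this sketch hard-codes: how vacancies are searched, when an agent is allowed to move, and how long a dissatisfied agent prospects before relocating.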
Probing Defects in Cross-Linked Polyethylene Pipe Using Infrared Imaging | Isaac Mercier | ||
Abstract: | Cross-linked polyethylene (PEX-a) pipes are increasingly being used to replace metal pipes for water transport and heating applications, driven by cost savings and ease of installation. However, there are challenges related to their long-term performance, i.e., lifetime, as well as the environmental impact of stabilizing molecules added to the formulations to achieve superior pipe performance. In this study, we used high-resolution infrared (IR) imaging to quantify changes to the chemical fingerprint of PEX-a pipes by exposing them to recirculating water at elevated temperatures and pressures. To initiate defects, we scored the inner surfaces of the pipes along their length using a sharp tool. Pipe segments were then exposed to high-pressure water at 90 °C for extended periods of time. Periodically, the pipe segments were removed, a microtome was used to obtain thin, 200-micron-thick axial slices of the pipe, and high-resolution infrared images of the slices were collected using a Bruker LUMOS II IR microscope. In an IR image, each pixel corresponds to an IR spectrum, which contains information on the different functional groups in the polymer matrix and the stabilizing molecules added to the pipes. This allowed us to analyze the spatial distribution of changes to the functional groups with aging. As the pipes were aged, we observed the formation of a region of increased carboxylate absorbance (a hot spot) near the apex of the scored defects. We characterized the spatial extent and absorbance of the hot spot as a function of aging time, and found that the maximum value of the absorbance increased monotonically with aging time over a period of 28 days. Interestingly, this distinctive chemical signature associated with the aging of the scored defects differs from that observed for cracks that have formed in pipes during in-service use. | ||
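The per-pixel analysis described above amounts to integrating each pixel's spectrum over a chosen band. The sketch below illustrates the idea; the spectral window and array shapes are assumptions for illustration, not values from the study.

```python
# Illustrative sketch: mapping a carboxylate-region band intensity from
# a hyperspectral IR image. Band window and shapes are assumptions.
import numpy as np

def band_map(cube, wn, lo=1540.0, hi=1600.0):
    """Integrate absorbance over a spectral window at every pixel of a
    (ny, nx, n_wavenumbers) cube, e.g. to localize 'hot spots'."""
    sel = (wn >= lo) & (wn <= hi)
    dwn = np.gradient(wn)[sel]               # local wavenumber spacing
    return (cube[:, :, sel] * dwn).sum(axis=2)

# A scalar aging metric could then be, e.g., the hot-spot maximum:
# peak_absorbance = band_map(cube, wn).max()
```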
Probing the Cosmos with 21cm: The Spectral Imprints of Ionization Cavities | Mateus Kaiber Buse | ||
Abstract: | The 21-cm line of neutral hydrogen (HI) is a useful tool for studying the Universe during the Epoch of Reionization (EoR). Currently, the 21-cm signals associated with the properties of HI gas and ionization morphology are computed using an integrated line-of-sight opacity. However, these approaches fail to fully capture the interactions between line emission and continuum radiation in radiative transfer, as well as variations in thermal and dynamical properties along the line of sight. To address these issues, we perform explicit radiative transfer calculations using the cosmological 21-cm radiative transfer (C21LRT) formulation. Additionally, we examine the effects of fully ionized HII bubbles on the 21-cm signal, exploring how their properties influence signal characteristics. We conduct calculations along single and multiple rays across bubbles at various redshifts. The single-ray C21LRT results reveal that the observed spectral features are primarily shaped by the 21-cm line profile, which, in our model, is influenced by turbulent velocity and gas temperature. Additionally, the multiple-ray calculations demonstrate that the 21-cm signals differ depending on whether the ionization front is stationary or expanding. This variation is due to the finite speed of light, which distorts the perceived shape of HII bubbles from the perspective of a distant observer. Importantly, our findings show that key features at the transition between ionized and neutral gas are not captured by the simplified optical depth method. Overall, this work represents a step toward accurately modeling 21-cm signals from the EoR. | ||
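For orientation, the explicit ray-by-ray treatment solves the radiative transfer equation along each line of sight, rather than evaluating only an integrated optical depth (generic textbook forms, not the specific C21LRT expressions):

```latex
\frac{dI_\nu}{ds} = j_\nu - \alpha_\nu I_\nu
\qquad \text{versus} \qquad
\tau_\nu = \int \alpha_\nu \, ds
```

Solving the full first-order equation retains the coupling between the 21-cm line and the continuum radiation field, along with variations of thermal and velocity structure along the ray, which a single integrated opacity discards.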