Oral presentations will be 15 minutes, followed by a 3-minute Q&A and a 2-minute set-up period, for a total of 20 minutes. Students will speak one at a time to an audience of approximately 30 other delegates and will be required to prepare slideshows to present their work. We ask that students have their slideshows ready for submission by 23:59 on Thursday, 24th October, 2024 (tentatively).
Particle Physics 1
Morning Session |
Time | Title | Presenter | Room |
10:00 | High Precision Photomultiplier Tube Non-Linearity Tests for the MOLLER Experiment | Tavleen Kainth | HEBB 212 |
Abstract: | The MOLLER experiment seeks to achieve high precision in measuring the weak mixing angle using parity-violating electron-electron scattering. The main electron detector for this international collaboration is being constructed at the University of Manitoba. This detector utilizes photomultiplier tubes (PMTs) to detect Cherenkov light produced in quartz bars by scattered electrons. Understanding the linearity of the PMT output is crucial for precisely measuring the parity-violating signal from the weak interaction. This research evaluated the non-linearities of the PMTs in three- and four-stage PMT-base configurations. We measured the integral and differential non-linearity values for 95 PMTs with both bases using a bench-top setup. Our results indicate that both 3-stage and 4-stage configurations meet the MOLLER specifications, with no statistically significant difference in non-linearity between them. This finding confirms that both configurations can be used without compromising the high precision required for testing the weak mixing angle in the MOLLER experiment. These results enhance the reliability of PMT linearity for the next generation of parity-violating electron scattering experiments. | ||
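The abstract does not spell out its analysis procedure; as orientation for what integral and differential non-linearity measure, here is a minimal Python sketch assuming textbook definitions (INL as the worst deviation from a best-fit line, DNL as the worst step-size deviation), which may differ from the MOLLER collaboration's exact conventions.

```python
import numpy as np

def nonlinearity(light_level, response):
    """Toy integral (INL) and differential (DNL) non-linearity.

    light_level: input light intensities (arbitrary units)
    response:    measured anode outputs at those intensities
    Definitions here are illustrative assumptions, not MOLLER's.
    """
    # Ideal response: least-squares straight line through the data.
    slope, intercept = np.polyfit(light_level, response, 1)
    ideal = slope * light_level + intercept

    # INL: worst deviation from the line, as a fraction of full scale.
    inl = np.max(np.abs(response - ideal)) / (response.max() - response.min())

    # DNL: worst fractional deviation of each step from the ideal step.
    dnl = np.max(np.abs(np.diff(response) - np.diff(ideal)) / np.diff(ideal))
    return inl, dnl

# Example: a response that saturates slightly at high light levels.
x = np.linspace(0.1, 1.0, 20)
y = x - 0.02 * x**2
print(nonlinearity(x, y))  # small but non-zero INL and DNL
```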
10:20 | Standard Model Mixology: Exploring Up-Down Quark Mixing through Ab-Initio Nuclear Theory | Benjamin Scully | HEBB 212 |
Abstract: | Studying the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix remains a key area of research in nuclear and particle physics, as any departure from unitarity indicates the existence of physics beyond the Standard Model. The largest source of error in determining this is the up-down quark mixing element Vud, determined through superallowed beta decay transitions. To extract Vud from experimental results, two corrective factors are required from nuclear theory: the nuclear-structure dependent radiative correction delta_NS, and the isospin breaking correction delta_C. Unfortunately, examining these in heavier nuclei has been limited to phenomenological methods due to the intense computational cost of the nuclear many-body problem. The In-Medium Similarity Renormalization Group (IMSRG) approach to ab-initio nuclear theory makes calculating these broader ranges of nuclei from first principles feasible, allowing us to either reaffirm or challenge the widely accepted phenomenological values. In this work, I implement the electroweak operator to allow calculation of delta_NS, examine the current capabilities and areas of improvement towards calculating delta_C, and compare the results to current phenomenological literature values. These two corrective factors are among the largest sources of error in the currently accepted value of Vud; this work thus aims to greatly reduce the uncertainty around the value of the up-down quark mixing element. | ||
10:40 | JetPointnet: A Machine Learning Approach to Cell-to-Track Attribution in the ATLAS Experiment | Joshua Himmens | HEBB 212 |
Abstract: | The ATLAS detector records proton collisions at the Large Hadron Collider, where protons are accelerated to 99.999999% of the speed of light to probe our understanding of physics at the high energy frontier. Critical to the analysis of ATLAS data is event reconstruction, where we associate calorimeter and tracker signals to determine which particles caused them, with how much energy, and through what process. A key challenge in this process is particle flow, where we attempt to relate energy deposits in the calorimeter to tracks from the inner detector. One promising approach to particle flow is using a PointNet machine learning architecture for this association. While PointNet models show significant promise on simplified data sets, they struggle with the complexity of more realistic ATLAS data. This talk demonstrates the challenges of this segmentation using a PointNet model and the transfer-learning approaches that have been developed to improve performance in the complex collision environment of the LHC. | ||
11:00 | Ab initio Combined Neutrino Mass Limits from Neutrinoless Double Beta Decay | Taiki Shickele | HEBB 212 |
Abstract: | Neutrinoless double-beta decay is a hypothetical second-order weak process that involves the decay of a pair of neutrons into two protons and two electrons. Observation of this decay would point to a Majorana nature of the neutrino, lepton number violation, the absolute mass scale of the neutrino, and possibly further new physics. Crucially, constraining neutrino masses from current and next-generation experiments requires the use of nuclear matrix elements, which until now have only been obtainable through phenomenological methods. However, recent developments have made these matrix elements accessible through ab initio nuclear theory. Using a Bayesian approach, we combine likelihoods from leading experiments to obtain a global neutrino mass constraint from ab initio nuclear matrix elements. Furthermore, utilizing a simple Poisson counting analysis, we construct the combined sensitivity reach from several next-generation experiments. Limits are also computed for a heavy sterile-neutrino exchange mechanism instead of the standard light-neutrino exchange, which arises in many theories beyond the Standard Model. These constraints allow us to determine the total physics reach of all neutrinoless double-beta decay experiments combined, better informing our exclusion reach on the absolute mass scale of the neutrino. | ||
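Neither the likelihoods nor the combination machinery are given in the abstract; as a toy version of the Poisson counting part, the sketch below multiplies Poisson likelihoods from three hypothetical experiments into one combined posterior and reads off an upper limit. All counts, backgrounds, and efficiencies are invented for illustration.

```python
import numpy as np
from scipy.stats import poisson

# Invented inputs for three hypothetical experiments: observed counts,
# expected background, and expected signal per unit shared rate.
observed = np.array([3, 5, 2])
background = np.array([2.8, 4.5, 1.9])
signal_per_rate = np.array([1.2, 0.8, 2.0])

rates = np.linspace(0.0, 5.0, 501)  # scan over the shared signal rate
dr = rates[1] - rates[0]

# Combined log-likelihood: sum of per-experiment Poisson log-pmfs.
loglike = np.array([
    poisson.logpmf(observed, background + r * signal_per_rate).sum()
    for r in rates
])

# Posterior with a flat prior on the rate (the Bayesian step).
post = np.exp(loglike - loglike.max())
post /= post.sum() * dr

# 90% credible upper limit on the shared rate.
limit = rates[np.searchsorted(np.cumsum(post) * dr, 0.9)]
print(f"90% credible upper limit on the rate: {limit:.2f}")
```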
11:20 | Electron Beam Optics in the e-Linac for DarkLight | Angela Sabzevari Gonzalez | HEBB 212 |
Abstract: | DarkLight is a fixed-target electron beam experiment set to run in the fall of 2024 at TRIUMF. The experiment will search for the “Dark Photon”, a particle theorized to be the boson that couples Standard Model matter and dark matter. Based on previous experiments, it is believed the particle will have a mass of about 17 MeV, so that is where DarkLight will focus its search. The e-Linac is the electron linear accelerator at TRIUMF, which will provide the beam for the experiment with energies ranging from 10 to 55 MeV after some upgrades. As the beam interacts with and passes through the target, its angular spread grows to sizes that cause high radiation risks in the experimental hall. To counteract this, we will use electromagnetic quadrupoles (EMQs) and permanent magnetic quadrupoles (PMQs) to focus and defocus the electron beam. The exact positions of these magnetic quadrupoles are crucial for the experimental setup and for ensuring radiation safety. Using simulation software including Geant4 and TRANSOPTR, the scattering angles of the beam were determined for specific target materials and beam energies. Those scattering angles were then used to determine the necessary magnet positions and strengths on the beamline that would allow the beam to reach the beam dump shielding without excess radiation. Three different magnetic setups were found given the magnets and space available. These setups will work for 1.0 um Tantalum, 1.0 um Carbon, and 2.0 um Carbon targets at almost all of the various beam energies. | ||
11:40 | Luminescence Testing of Materials in the Scintillating Bubble Chamber Experiment | Alex Hayes | HEBB 212 |
Abstract: | The Scintillating Bubble Chamber (SBC), a dark matter direct-detection experiment, aims to detect bubbles produced by dark matter interactions in a superheated liquid target. To achieve this, the chamber is monitored by cameras and illuminated by flashing LEDs. Outside the target volume, silicon photomultipliers (SiPMs) capture scintillation light which can be used to identify non-dark-matter interactions. The SiPMs, however, are also sensitive to the LED light. This presentation describes the LED light response of various materials used in the SBC volume, allowing us to estimate the scintillation detection uptime while the LEDs are flashing: an important result for the operation strategy of SBC. |
Afternoon Session |
Time | Title | Presenter | Room |
2:00 | Exploring the Directionality of Particle-Induced Events for the NEWS-G Dark Matter Detector | Keiran Nicholson | HEBB 212 |
Abstract: | New Experiments With Spheres-Gas (NEWS-G) is a direct dark matter detection experiment using Spherical Proportional Counters with light noble gases to search for very low-mass Weakly Interacting Massive Particles (WIMPs). In this regime, the detector will also be sensitive to the so-called solar neutrino floor, which mimics dark matter signals and cannot be shielded against. To distinguish dark matter from neutrino signals in the future, we need detectors sensitive to the direction of incoming particles, because the neutrino background comes from the Sun’s direction. The NEWS-G detector, consisting of a 135 cm diameter copper sphere, is equipped with an 11-anode read-out sensor to detect particle-induced events in a near-radial electric field. In this work, we present the potential for directional sensitivity through the signal generated by this new multi-anode sensor design. Using computational and modelling tools, we developed a method to retrace events from data given by the NEWS-G detector. In addition, we studied the optimal conditions of the detector for ascertaining the directionality of detected events. This simulation will provide a solid framework that can be used to compare with experimental data to discriminate between the solar neutrino background and dark matter candidates for the NEWS-G experiment. | ||
2:20 | Antiproton Interactions at Belle II | Karalee Reimer | HEBB 212 |
Abstract: | The Belle II particle physics experiment has collected over 500 fb⁻¹ of e+e- collision data since it began operating in 2019. This study contributes to an ongoing analysis measuring the neutron-antineutron production cross-section in e+e- collisions with data collected by Belle II. A critical component of this analysis is evaluating the data-to-simulation agreement for neutron and antineutron detection in the Belle II calorimeter, which is constructed from CsI(Tl) scintillation crystals. Large samples of proton, antiproton, and pion tracks with momenta from 1.5 to 4.0 GeV/c are selected from Belle II data and simulation samples to measure the probability of their nuclear interaction in Belle II’s CsI(Tl) electromagnetic calorimeter. These results are important for quantifying the data-to-simulation agreement for nucleon hadronic interactions in CsI(Tl) for Belle II analyses with final-state neutrons. | ||
2:40 | Non-Prompt and Fake Lepton Background Analysis for Doubly Charged Higgs Boson Searches | Denaisha Kraft | HEBB 212 |
Abstract: | The ATLAS detector can be used to search for doubly charged Higgs bosons that are produced via vector boson fusion (VBF). In this search, the doubly charged Higgs bosons decay into a pair of W bosons with the same electric charge. Events where the W bosons decay leptonically into electrons or muons are selected in this analysis. However, other processes in the detector can mimic this final state, which can allow them to enter the signal region (SR) of the measurement. These backgrounds include non-prompt leptons originating from secondary decays, as well as fake leptons from misidentified objects in the detector. In this study, Monte Carlo (MC) simulations of background processes are used to identify the type of fake and non-prompt leptons present in the SR. This helps with selecting appropriate data control regions to estimate the type of fake and non-prompt leptons more accurately, leading to better differentiation between true signal and background processes. | ||
3:00 | Top Quark Pair and QCD Multijet Backgrounds in the hh → bbbb Boosted Analysis with ATLAS | Josephine Brewster | HEBB 212 |
Abstract: | The Standard Model of particle physics (SM) has been shown to be a great tool for making predictions at the subatomic scale. Many parameters of the SM have been tested experimentally and have agreed remarkably well with predictions. One prediction that has yet to be tested is the shape of the Higgs potential. A next step in probing this is measuring the Higgs self-coupling modifier. Using data from the ATLAS detector at the LHC, the goal of the ATLAS di-Higgs to four b quark (hh4b) analysis is to constrain the Higgs self-coupling modifier. This analysis studies events with two Higgs bosons, each decaying to two b quarks. A significant background for this process is top quark pair production. In the study presented in this talk, top quark pair backgrounds in the hh4b boosted regime are studied using simulated data. A transformer neural network (GN2X) bb tagger is applied to top quark jets in these top quark pair events. The number of top quark pair events is found to be significantly reduced compared to the previous analysis, which exploited variable-radius track jets for Higgs-to-bb identification. The truth-level composition of top quark pair events passing the GN2X tagger is also studied and compared to QCD multijet simulations. | ||
3:20 | Two Coupled Modes in a Trenchcoat: Demystifying Non-Hermitian Absorption and Transmission Spectra | Bentley Turner | HEBB 212 |
Abstract: | The dynamics of a small open subsystem in contact with a larger, homogeneous energy or particle reservoir can be well approximated by constructing an effective Hamiltonian for the subsystem alone, which becomes non-Hermitian due to gain and loss terms describing the environmental interaction. These so-called non-Hermitian systems have attracted significant research interest because they exhibit unique physical phenomena including exceptional points, the skin effect, and nonreciprocal transmission, but properly characterizing the basic properties of these systems still proves to be a difficult task in some cases. A common tool used to characterize resonant systems is a transmission (S21) measurement, which consists of passing energy into the system via external driving at a range of frequencies and identifying the resonances where the transmitted signal strength is maximal or minimal. However, complications arise when using S21 to characterize resonances of non-Hermitian systems. In this work, a model system of two electrical resonators coupled via a dissipation channel is studied to demonstrate that well-known non-Hermitian frequency-merging effects can be masked from appearing in transmission resonance measurements, despite being evident in the absorption resonances of the open system, and to elucidate the mechanism of this disagreement. The system was modelled analytically, with the transient response described using an effective non-Hermitian Hamiltonian and the steady-state response using a scattering matrix approach. The model was also constructed and probed experimentally, with the transient response measured using a function generator and oscilloscope, and the steady state with a vector network analyzer. The theoretical models and experimental results both provide evidence for a phase interference effect whereby transmission spectra display uncoupled resonances and the non-Hermitian coupling effect is only visible in absorption spectra, an understanding which will aid in the design of measurement schemes for future research on non-Hermitian resonant systems. | ||
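For readers unfamiliar with the frequency-merging effect mentioned above, the following minimal Python sketch diagonalises a generic two-mode effective Hamiltonian with purely dissipative coupling (parameter values invented, not taken from this work) and shows the two resonance frequencies coalescing at an exceptional point as the coupling grows.

```python
import numpy as np

def eigenfrequencies(omega1, omega2, gamma):
    """Complex eigenvalues of H = diag(w1, w2) - i*gamma*[[1,1],[1,1]],
    a standard two-mode Hamiltonian with dissipative coupling. Real
    parts are resonance frequencies; imaginary parts are linewidths."""
    H = np.diag([omega1, omega2]).astype(complex)
    H -= 1j * gamma * np.ones((2, 2))
    return np.linalg.eigvals(H)

# With detuning (w2 - w1)/2 = 0.5, the exceptional point sits at
# gamma = 0.5; beyond it the real parts lock together at 1.5.
for gamma in (0.1, 0.5, 1.0, 2.0):
    ev = eigenfrequencies(1.0, 2.0, gamma)
    print(f"gamma={gamma}: resonances at {np.sort(ev.real)}")
```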
3:40 | Building an Optical Model for the Super-Kamiokande 20-Inch Photomultiplier Tube | Matthew Marzano | HEBB 212 |
Abstract: | The Super-Kamiokande experiment (also known as Super-K) is a neutrino and nucleon decay detector located 1 km underground inside Mount Ikeno in the Kamioka Mine, Japan. The detector is a cylindrical tank 39.3 m in diameter and 41.4 m in height, filled with approximately 50 kilotonnes of ultra-pure water, and lined with over 11,000 20-inch-diameter photomultiplier tubes (PMTs) used to detect the Cherenkov radiation produced when high-energy charged particles pass through the water. The charge and timing of all PMT signals are combined to reconstruct and analyse the events which caused them. In order to study events in Super-K, we need to precisely understand the response of the detector’s PMTs. As such, it is necessary to have a robust model of the PMT under varying magnetic fields, and its response to light at different orientations. A crucial component of any full PMT model is the optical model, which has been simulated in Geant4. Here, we describe the methods used to construct an optical model for the Super-K PMT, the results we’ve obtained, and next steps for the future. | ||
4:00 | PIONEER: A Next-Generation Experiment for Rare Pion Decay Precision Measurement | William Chow | HEBB 212 |
Abstract: | The PIONEER experiment is a next-generation initiative designed to test lepton flavor universality (LFU) with unprecedented precision through the study of rare pion decays. In its first phase, PIONEER seeks to improve the measurement of the branching ratio between pion decays to positrons and anti-muons. This ratio, sensitive to new physics, currently exhibits a precision in the Standard Model that is approximately an order of magnitude higher than the best experimental results to date. Any statistically significant deviation between the experimental measurement and the Standard Model prediction would indicate the possibility of new physics beyond the Standard Model, particularly through LFU violation. To achieve this goal, PIONEER will employ advanced detector technologies aimed at minimizing systematic uncertainties while enhancing statistical precision. This improved sensitivity will allow PIONEER to explore potential deviations that could provide insight into LFU violations and uncover evidence for new particles or interactions. By focusing on achieving a new level of precision, PIONEER’s first phase will offer a unique opportunity to test the limits of the Standard Model, potentially revealing discrepancies that challenge our current understanding of fundamental particle interactions. |
Particle Physics 2
Morning Session |
Time | Title | Presenter | Room |
10:00 | Machine Learning for Enhanced Energy Analysis of Point Contact Ge Detectors | Meghan Naar | HENN 318 |
Abstract: | Point-contact germanium detectors are widely used for particle detection and energy analysis, particularly in the search for neutrinoless double beta decay. In these detectors, energy from an event is recorded as a step-like pulse, where the energy is proportional to the difference in charge between the ‘top’ and ‘bottom’ steps. However, a resistor in the detector causes the top step to decay exponentially back to a baseline, complicating the accurate measurement of the event’s true energy. My talk will focus on the development of machine learning models designed to reconstruct the original step-like pulses from their decayed forms, enabling improved energy analysis. This will include an introduction to machine learning, types of neural networks, and generative AI. I will introduce deep learning-based Variational Autoencoders (VAEs) trained to reconstruct pulse shapes from decayed versions, using only essential information stored in their latent space. Additionally, I will discuss how they were trained so that they can be applied to real detector data for accurate energy reconstruction in the search for neutrinoless double beta decay. | ||
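The VAE itself is not reproduced here; for context, the sketch below implements the classical single-pole deconvolution that such models aim to improve on: undoing an exponential decay with a known time constant to recover the underlying step pulse. The waveform and decay constant are synthetic.

```python
import numpy as np

def undo_exponential_decay(waveform, tau):
    """Recover a step-like pulse from a waveform whose flat top decays
    exponentially with time constant tau (in samples). Classical
    single-pole deconvolution; tau is assumed known."""
    a = np.exp(-1.0 / tau)
    # Running sum of all earlier samples compensates the lost charge.
    running = np.concatenate(([0.0], np.cumsum(waveform)[:-1]))
    return waveform + (1.0 - a) * running

# Synthetic example: a unit step at sample 100, decaying with tau = 500.
n = np.arange(1000)
decayed = np.where(n >= 100, np.exp(-(n - 100) / 500.0), 0.0)

restored = undo_exponential_decay(decayed, 500.0)
print(restored[900])  # ~1.0: the original step height is recovered
```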
10:20 | The IRIS Facility with Solid H2/D2 Targets at TRIUMF for Reaction Studies of Rare Isotopes | Gabriel Gorbet | HENN 318 |
Abstract: | To probe the frontiers of knowledge on the short-lived isotopes near the neutron drip line, understand reactions of astrophysical significance, or discover hitherto unknown properties of nuclear shell structure, one requires novel instrumentation. The IRIS facility features solid hydrogen and deuterium targets, yielding high areal density while remaining geometrically thin (50-100 μm), providing a better-defined reaction vertex point. The poster will describe the solid H2/D2 target and the cooling and vacuum conditions investigated for its robust operation. In order to identify the various reaction channels originating from interactions of the rare isotope beams with the H2/D2 target, the IRIS facility performs particle identification using a segmented silicon semiconductor detector array and a CsI(Tl) inorganic crystal array, forming a ΔE-E telescope. The energy and angle recorded by the array provide knowledge of the reaction kinematics. Precise calibration of this telescope is necessary to extract the nuclear excitation spectra. The poster will also detail current investigations into the angular and potentially proton-number dependence of the detector gains. | ||
10:40 | The Compton Slope Parameter and the Compton and Two Photon Spectrometer | Laura Hubbert | HENN 318 |
Abstract: | The nuclear Equation of State (EOS) represents the interactions of dense nuclear matter and is used to study astrophysical objects like neutron stars. It is directly correlated with neutron skin thickness, the layer of outermost neutrons observed to envelop large nuclei. The most accurate way to study neutron skin thickness is through Parity Violating Electron Scattering (PVES); however, such experiments contain error contributed by the Beam-Normal Single-Spin Asymmetry (BNSSA), which describes the small, normal component of a particle beam’s polarization due to any slight bends within the path of the beam itself. The BNSSA is proportional to the Compton Form Factor (CFF), which is in turn proportional to the Compton Slope Parameter (CSP). In order to constrain the nuclear EOS, neutron skin thickness must be measured. Currently, the extraction of the neutron skin of heavy nuclei is hindered by the 20% error in theoretical predictions of the BNSSA due to assumptions about the CSP, which depends on the energy deposited into a target particle during elastic Compton scattering. This is responsible for a systematic error in the nuclear EOS, reducing its accuracy and tying it to the precision of the CSP. To reduce the error on this parameter, it is crucial to separate elastic and inelastic Compton scattering events, which can be discerned with the high-energy-resolution NaI detector CATS (Compton and Two Photon Spectrometer). CATS detector calibrations, tests, and runs were executed during July and August of 2024. The data collected so far include cosmic ray and in-beam data; data on Compton scattering from Carbon-12 are anticipated in the near future. The experimental results will be cross-checked against a Geant4 detector simulation, allowing the extraction of the CSP and, it is hoped, a reduction in its uncertainty. | ||
11:00 | Radon Trapping and Laser Spectroscopy Systems for the NEWS-G Dark Matter Experiment | Jordan Kurtzweg | HENN 318 |
Abstract: | A key challenge in gaseous dark matter detectors, and indeed in all dark matter experiments, is the presence of radon, which creates background noise. New Experiments With Spheres-Gas (NEWS-G) is a direct detection dark matter experiment using spherical proportional counters filled with light noble gases mixed with methane. The hydrogen-rich methane target makes the NEWS-G detector extremely sensitive to very low-mass dark matter particles. A new radon trap has been developed at the University of Alberta, but it also absorbs methane. This study determines the optimal amount of methane needed to saturate the radon trap and restore the initial methane concentration, measured using a laser absorption spectroscopy (LAS) system. The research findings will enhance the sensitivity and reliability of the NEWS-G experiment, particularly in detecting low-mass dark matter particles, as accurate methane concentration is crucial for calculating dark matter exclusion limits. | ||
11:20 | Investigating Nuclear Shell Evolution at the Proton Drip-Line through the 20Mg(d,p)21Mg Reaction at IRIS | Zachary Saunders | HENN 318 |
Abstract: | Exotic nuclei, characterized by a large asymmetry in their numbers of protons and neutrons as well as exceptionally short half-lives, are transforming our understanding of the nuclear force. Current nuclear physics models do a fantastic job of explaining the properties of stable nuclei, but fail to predict exotic structures seen in nuclei far from stability. One manifestation of this is seen in nuclear shells. The nuclear shell model predicts that isotopes with certain numbers of protons and neutrons, those corresponding to complete shells, should be more bound than other neighbouring isotopes. However, these known shell closures are showing signs of vanishing in some exotic nuclei. One of these shell closures is at neutron number N=8, which has been found to vanish at the neutron drip-line. The goal of this project was to find direct evidence as to whether the N=8 shell closure persists in the extremely proton-rich exotic nucleus 20Mg. The one-neutron transfer reaction 20Mg(d,p)21Mg was performed at the IRIS facility at TRIUMF using a radioactive beam of 20Mg accelerated to 8.5 MeV/u impinging on the novel solid deuterium target. Depending on which excited states of 21Mg were populated by the reaction, we aim to determine whether there is a weakening of the conventional N=8 shell closure. The presentation will describe the experiment and preliminary observations. | ||
11:40 | PICO-500 Muon Veto System | Emma Greenall | HENN 318 |
Abstract: | PICO-500 is a next generation bubble chamber being built to search for dark matter. With its increased active volume, PICO-500 will probe further into the low cross-section space than its previous iterations. PICO-500 is set to be installed in the CUBE hall at SNOLAB. Despite extensive shielding from the rock above SNOLAB, cosmogenics can still reach the detector. Muons are especially problematic, as they can generate neutrons which create signals that look like dark matter in the bubble chamber. To combat this, a water tank containing 48 PMTs will surround the detector to look for Cherenkov radiation from muons. To ensure this veto works properly, a calibration and monitoring system consisting of fibre optics and LED drivers was built and tested this summer. The synchronization of the LED flashes, the fibre attenuation, the fibre propagation delay, and the pulse duration were determined. A finished electronics box is undergoing final modifications and will be ready for use in PICO-500. |
Afternoon Session |
Time | Title | Presenter | Room |
2:00 | Alpha Emission Monitoring for Nuclear-Spin Polarization and Collinear Laser Spectroscopy | Élyse D’Aoust | HENN 318 |
Abstract: | Alpha particles are a serious internal health hazard due to their short range and high linear energy transfer. With the increasing demand for delivering alpha-emitter radioactive ion beams (RIBs) to the polarizer beamline at the Isotope Separator and Accelerator facility (ISAC) at TRIUMF, it is becoming increasingly important to monitor the resulting alpha-emitter contamination inside the beamline in a safe and efficient way. To address these challenges, this project involved the installation of an alpha detector close to the major beam dump inside the polarizer beamline. This method enables direct and convenient monitoring of the alpha radioactivity during and between experiments, while avoiding the complexity and uncertainty associated with the gas sampling methods previously used. This information is necessary to determine the required cooldown time between experiments, as well as to ensure safety before opening the beamline. After the installation of the detector, an alpha spectrum from the beam dump was obtained. Three main decay chains were identified: Ac-226, Th-228, and Ra-226. These were traced back to previous experiments that were run in the beamline using beams of Ac-225, Ac-226, Ac-228, and Ac-229 in August 2022. This successful identification validates the reliability of the installed detector for alpha emission monitoring, offering valuable insights into the radioactive decay processes that occurred within the beamline. The implementation of this alpha detector provides an efficient tool for monitoring alpha radioactivity in future experiments. | ||
2:20 | Investigating the ionizing effects of a high energy proton beam on thermocouples, and potential applications of this effect including thermocouple-based beam-position monitors | Zeest Fatima | HENN 318 |
Abstract: | Temperature monitoring is essential in high-power target operations at ISAC-TRIUMF to ensure target material integrity and optimize isotope production. This study investigates the use of type K thermocouples for temperature measurement under intense proton beam conditions. The proton beam, with energies up to 500 MeV, interacts with various target materials, such as uranium carbide, used for producing rare isotopes via the ISOL method. During experiments, thermocouples placed downstream from the target exhibited significant variations in temperature readings, particularly when placed directly in the path of the proton beam. The temperature readings showed abnormal behaviors, such as negative values of -270 °F and polarity-independent shifts, attributed to ionization effects from the beam overlaying the Seebeck effect. Various experiments were carried out to study and quantify the charge deposition effects of the proton beam. Furthermore, applications of this effect, such as prototyping a beam position monitor using the ionizing effects of the beam, are also being considered. This talk will provide valuable data on the behavior of thermocouples under high-energy proton irradiation, contributing to improved temperature measurement techniques in high-energy particle accelerator environments. It will also cover applications of this effect in devices such as beam position monitors. | ||
2:40 | Total Muon Capture Rates from Ab-Initio No-Core Shell Model | Diego Arturo Araujo Najera | HENN 318 |
Abstract: | Muon capture is an electroweak process in which a negatively charged muon is captured by the nucleus of a muonic atom. The muon, initially stopped in the outer shells of the atom, first cascades all the way down to the lowest 1s orbital, and can then either decay or interact with a proton through the exchange of a W boson, resulting in the emission of a neutrino and a reduction of the atomic number by one. Because of the large mass of the muon, muon capture shares the magnitude of its momentum transfer (about 100 MeV), as well as its hadronic currents, with neutrinoless double beta decay, a rare decay that, if observed experimentally, would imply physics beyond the Standard Model. Therefore, an accurate theoretical treatment of muon capture, which can also be studied experimentally, can enlighten our understanding of the physics involved in neutrinoless double beta decay. The high momentum transfer, shared by the final nucleus and the neutrino, also allows for transitions up to highly excited nuclear states. While total muon capture rates are well known experimentally, they present a daunting task for nuclear theory. Here, we present ab initio predictions for total muon capture rates on 12C to 12B using the No-Core Shell Model, a first-principles method for solving the many-body Schrödinger equation. To overcome the computational challenges previously faced by Jokiniemi et al. when attempting to calculate total capture rates, we employ the Lanczos algorithm, an effective tool for capturing total transition strengths. | ||
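The Lanczos step referenced above exploits the fact that total strengths and their low moments converge after only a few recursions; here is a bare-bones sketch of the recursion itself, with a random symmetric matrix standing in for the nuclear Hamiltonian (a generic illustration, not the authors' code).

```python
import numpy as np

def lanczos(H, pivot, iterations):
    """Plain Lanczos recursion: tridiagonal coefficients (alpha, beta)
    for matrix H and pivot vector |v0> = O|gs>. Eigenvalues of the
    small tridiagonal matrix approximate the states carrying most of
    the transition strength from the pivot."""
    alpha, beta = [], []
    v_prev = np.zeros_like(pivot)
    v = pivot / np.linalg.norm(pivot)
    b = 0.0
    for _ in range(iterations):
        w = H @ v - b * v_prev
        a = v @ w
        w -= a * v
        b = np.linalg.norm(w)
        alpha.append(a)
        beta.append(b)
        v_prev, v = v, w / b
    return np.array(alpha), np.array(beta[:-1])

# Toy stand-in for the Hamiltonian and the operator-on-ground-state pivot.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 200))
H = (A + A.T) / 2
pivot = rng.normal(size=200)

alpha, beta = lanczos(H, pivot, 20)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
print(np.sort(np.linalg.eigvalsh(T))[:3])  # lowest Ritz values
```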
3:00 | Shim coils and their importance in measuring the electric dipole moment | Modeste Katotoka | HENN 318 |
Abstract: | Precise measurements of the neutron electric dipole moment (EDM) could result in a discovery of a violation of particle-antiparticle symmetry, and of new physics beyond the Standard Model. The TRIUMF Ultracold Advanced Neutron (TUCAN) collaboration is preparing an experiment to measure the neutron EDM with an accuracy of 1 × 10⁻²⁷ e·cm, a factor of 10 better than the world’s previous best, published in 2020. Neutron motion in the EDM cells in the presence of magnetic field inhomogeneity results in a false neutron EDM signal. Shim coils are used to characterize and reduce magnetic field inhomogeneities. The shim coils must make the field inside the EDM measurement cells very homogeneous, σ(Bz) < 40 pT in a field of Bz = 1 µT, in order to meet the requirements of the experiment. I will present my design studies of a shim coil system for the TUCAN EDM experiment, which is based on square coils placed on the walls of the magnetically shielded room surrounding the EDM cells. I will also report on the construction of the coils, which was completed in August 2024. The coils will be installed and used starting in October 2024, as a part of the commissioning of the magnetically shielded room and precision atomic magnetometer systems. I will further present our plans for the operation of the coil system. | ||
3:20 | Alpha-induced Proton Recoils in the SNO+ Neutrino Experiment | David Drobner | HENN 318 |
Abstract: | SNO+ is an operational multipurpose liquid-scintillator-based neutrino detector located at SNOLAB in Sudbury, Ontario, with a primary goal of searching for neutrinoless double beta decay. Due to the high sensitivity required to make measurements of the type that SNO+ does, thoroughly understanding the experiment’s background profile is paramount. As such, when backgrounds that did not fit the existing model appeared during a previous data-taking phase of the experiment, it was cause for concern. The most promising theory to explain some of these unknown backgrounds is the scattering of protons in the scintillator molecules by alpha particles. It was hypothesized that the differing quenching factors for the scintillation light from alpha particles and protons may explain why more light was observed than previously expected. This talk discusses the writing of a simulation to test this hypothesis, with a particular focus on how the physical phenomena at work were modelled, as well as the performance and scalability of simulations of particle interactions. The talk will also touch on the use of parallel computing (with a particular focus on general-purpose GPU computing) to accelerate these types of simulations, as well as the applicability of techniques used in this simulation to other types of physical simulations. | ||
3:40 | Resparking the Neutron Lifetime Debate with PENeLOPE | Dinel Anthony | HENN 318 |
Abstract: | The neutron lifetime is a very important parameter, influencing Big Bang nucleosynthesis and the calculation of V_ud, the up-down quark mixing coefficient of the CKM matrix. The two main methods to measure the neutron lifetime are the beam method and the bottle method. In the beam method, one counts the number of neutron decays in a well-defined volume with known neutron density by collecting and detecting decay electrons or protons. In the bottle method, neutrons are stored for different predetermined durations and the surviving neutrons are counted. The error bar of the current world average of 878.4 ± 0.5 s is inflated by a factor of 1.8 due to large discrepancies between the two methods. The Precision Experiment on the Neutron Lifetime Operating with Proton Extraction (PENeLOPE) combines both methods, with the goal of achieving a precision of 0.1 s. This is done by storing ultracold neutrons (UCN) in a magneto-gravitational trap and detecting protons from neutron decay in a proton detector. After predetermined durations, the remaining neutrons are counted in a neutron detector. The combination of these two detection systems and near-lossless magnetic storage provides the possibility of reaching unprecedented precision. Currently, PENeLOPE is being installed at TRIUMF, downstream of the TUCAN UCN source. This work aims to discuss the near-future plans for PENeLOPE at TRIUMF. | ||
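As a back-of-the-envelope illustration of the bottle method described above: with surviving-neutron counts N1 and N2 after storage times t1 and t2, the lifetime follows directly from the exponential decay law. The counts below are invented.

```python
import math

def bottle_lifetime(n1, t1, n2, t2):
    """Neutron lifetime from two storage measurements, assuming pure
    exponential loss: N(t) = N0 * exp(-t / tau)."""
    return (t2 - t1) / math.log(n1 / n2)

# Invented counts after 100 s and 1000 s of storage.
tau = bottle_lifetime(50000, 100.0, 18000, 1000.0)
print(f"tau ~ {tau:.0f} s")  # ~881 s, near the quoted world average
```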
4:00 | Pushing the Limits of Dark Matter Search Experimentation With PICO-500 | Emery Pattison | HENN 318 |
Abstract: | PICO-500 is a next-generation, tonne-scale, C3F8-based, spin-dependent bubble chamber detector. The goal of PICO-500 is to directly detect Weakly Interacting Massive Particles (WIMPs), particles that could explain the dark matter phenomenon. Dark matter makes up approximately 85% of all matter and is an essential ingredient in the universe’s formation. When the construction of PICO-500 is complete, it will operate at SNOLAB, a 2 km deep underground research laboratory within an active nickel mine near Sudbury, Ontario. With the help of world-class background mitigation through extreme cleanliness and the roughly 2000 meters of water-equivalent overhead shielding from cosmic rays that SNOLAB provides, PICO-500 will be the most sensitive spin-dependent dark matter search experiment in the world. This talk will give an update on the status of PICO-500 and will also highlight some of the ongoing personal and collaboration-wide analysis efforts which will help guide PICO-500 to success. |
Astrophysics & Particle Physics
Morning Session |
Time | Title | Presenter | Room |
10:00 | Resolving PSOs in Images from Ground-Based Telescopes | Ravleen Kaur | HEBB 216 |
Abstract: | Many astronomical questions require measuring stars in crowded fields like star clusters and the Galactic Plane. Often, crowding issues are solved by acquiring high-resolution space-based telescope images. However, given that space telescope time is limited, astronomers want to make full use of lower-resolution ground-based telescope images. Our research focuses on a seemingly simple yet crucial question: how close can a pair of stars be while still being resolvable by a ground-based telescope? To answer this question, we used a Hubble Space Telescope catalogue of the globular cluster Messier 2 to find star pairs. Next, we searched for the same pairs in the Sloan Digital Sky Survey (SDSS) catalogue. We define the “overlap flux” to quantify the effect on a star’s measurement from its neighbours, and find that it is just as important as the brightness of stars and the distance between them in determining a star pair’s detectability. Then, we compare the completeness of star pairs in SDSS to a semi-analytic prediction based on simulated images. Because the prediction assumes that a pair of interest is isolated, we find that this prediction must be adjusted as a function of overlap flux to account for the effect of neighbouring stars. | ||
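The abstract does not give a formula for the overlap flux; one plausible toy reading, assuming circular Gaussian PSFs, is the flux a neighbour deposits at the target star's centre, sketched below (the definition and numbers are assumptions, not the authors').

```python
import numpy as np

def overlap_flux(flux_neighbour, separation, fwhm):
    """Flux a neighbouring star contributes at the target's centre,
    assuming a circular Gaussian PSF. One plausible reading of the
    abstract's 'overlap flux', not its exact definition."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    # Peak-normalised Gaussian profile evaluated at the separation.
    return flux_neighbour * np.exp(-separation**2 / (2.0 * sigma**2))

# An equal-flux neighbour one FWHM away contributes ~6% of its flux.
print(overlap_flux(1.0, separation=1.0, fwhm=1.0))
```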
10:20 | Truncated Star Formation and Ram Pressure Stripping in the Coma Cluster | Ariel Broderick | HEBB 216 |
Abstract: | We use over 100 galaxies from the MaNGA (Mapping Nearby Galaxies at Apache Point Observatory) survey to study the physical drivers of star formation quenching in the Coma cluster. We split our sample of Coma galaxies into low- and high-R subsamples based on their projected distance from the center of the cluster. We then measure specific star formation rate (sSFR) radial profiles for both Coma subsamples as well as a control sample of non-cluster field galaxies. We find that both the low- and high-R Coma samples have reduced sSFRs relative to the control. Compared to the high-R sample, galaxies within the core of the Coma Cluster have sSFR profiles that fall off more steeply with galactocentric radius. We then apply a toy model based on slow-then-rapid quenching via ram pressure stripping. We find that this model is able to reproduce both the difference in sSFR profiles between field and Coma galaxies, as well as the difference at large galactocentric radius between low- and high-R Coma galaxies. These results demonstrate that ram pressure stripping plays a significant role in quenching star formation in the nearest massive galaxy cluster. | ||
10:40 | Finding Pluto from Pixels | Aran Karagonlar | HEBB 216 |
Abstract: | Binary Trans-Neptunian Objects (TNOs) act as tools for determining the history of our solar system, such as constraining the migration of Neptune. While these binaries can be found with space telescope follow-up, observing all TNOs with space telescopes is not feasible; thus we must find them through alternative means. By forward-modeling pixelated images as the sum of two PSFs, we can use images taken from ground-based telescopes to determine the probability that each object is a binary and measure the locations of the binary components. Currently, these methods are being tested on the well-known TNO binary Pluto-Charon. | ||
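A minimal version of the two-PSF forward model mentioned above, assuming circular Gaussian PSFs and a least-squares fit (all parameters are invented; the actual analysis presumably uses realistic PSFs and a probabilistic comparison against a single-PSF model), might look like this:

```python
import numpy as np
from scipy.optimize import least_squares

def two_psf_image(params, xx, yy):
    """Pixelated image modelled as the sum of two Gaussian PSFs.
    params = (x1, y1, f1, x2, y2, f2, sigma)."""
    x1, y1, f1, x2, y2, f2, sigma = params
    g1 = f1 * np.exp(-((xx - x1)**2 + (yy - y1)**2) / (2 * sigma**2))
    g2 = f2 * np.exp(-((xx - x2)**2 + (yy - y2)**2) / (2 * sigma**2))
    return g1 + g2

# Synthetic 32x32 image of a blended pair, plus noise.
xx, yy = np.meshgrid(np.arange(32.0), np.arange(32.0))
truth = (14.0, 15.0, 100.0, 18.0, 16.5, 40.0, 2.0)
rng = np.random.default_rng(1)
image = two_psf_image(truth, xx, yy) + rng.normal(0.0, 0.5, xx.shape)

fit = least_squares(
    lambda p: (two_psf_image(p, xx, yy) - image).ravel(),
    x0=(13.0, 14.0, 80.0, 19.0, 17.0, 30.0, 2.5),
)
print(fit.x)  # recovered positions and fluxes of the two components
```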
11:00 | A Database of Transit Timing and Duration Variations Induced by Systemic Proper Motion | Dao Thai Uyen Pham | HEBB 216 |
Abstract: | The formation and evolution of planetary systems remains one of the major puzzles in astronomy and planetary science. While thousands of planets outside the Solar System (exoplanets) have been detected, contributing to understanding exoplanet sizes, compositions, and system architectures, observing long-term planet-planet and planet-host star interactions remains challenging. Observing such interactions enables the testing of planet evolution theories and can constrain interior planetary structure and underlying planetary physics. Now, with about 30 years of data, detecting such interactions has become possible. Exoplanet transits are particularly important to studying their orbital evolution. Such events occur when the exoplanet crosses in front of its host star, causing periodic, measurable dips in the host star’s brightness. By monitoring variations in the timing and duration of these dips over decades, it is possible to test for evolutionary effects such as planet-host star tidal evolution, or precession due to planet-host star, planet-planet, or general relativistic interactions. However, as the sensitivity toward detecting planetary orbital evolution becomes greater with time (due to having more and more recurring transits for a system), it is important to consider phenomena that can cause apparent changes to transit observations. Such phenomena, such as the motion of the star and planet relative to Earth, may mask or mimic the true orbital evolution. This research utilizes the Exoplanet Archive and Gaia databases to determine the maximum expected apparent orbital variations, and to conduct statistical analyses that reveal the extent to which high systemic proper motion can bias long-term exoplanet transit observations. A comprehensive database of transit timing and duration variation effects will be made available to the public to aid in future research. | ||
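One concrete, well-known example of the masking effect described above is the Shklovskii effect: transverse proper motion makes an orbital period appear to drift at Pdot/P = mu^2 * d / c. The sketch below works through that arithmetic for an invented nearby, high-proper-motion system; the period itself cancels in the accumulated timing offset.

```python
import math

# Shklovskii effect: apparent Pdot/P = mu^2 * d / c.
MAS_PER_YR = math.radians(1.0 / 3.6e6) / 3.156e7  # mas/yr -> rad/s
PC = 3.086e16                                     # parsec in metres
C = 2.998e8                                       # speed of light, m/s

mu = 100.0 * MAS_PER_YR   # proper motion: 100 mas/yr (invented)
d = 10.0 * PC             # distance: 10 pc (invented)

pdot_over_p = mu**2 * d / C
baseline = 30.0 * 3.156e7  # ~30 years of transit monitoring, seconds

# Accumulated transit-timing offset for a constant apparent Pdot:
# delta_t = (1/2) * (Pdot/P) * t^2, independent of the period itself.
offset = 0.5 * pdot_over_p * baseline**2
print(f"apparent Pdot/P = {pdot_over_p:.2e} 1/s")
print(f"timing offset after 30 yr ~ {offset:.2f} s")  # ~0.1 s
```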
11:20 | Eyes on the Skies with Thunderbird South: First Insights into Exoplanetary Observations and Space Sustainability | Chantal Hemmann | HEBB 216 |
Abstract: | The University of British Columbia’s new southern observatory, Thunderbird South, is a half-meter telescope situated 1700 meters above sea level in the arid Rio Hurtado valley, six hours north of Santiago de Chile. In operation, the telescope is used for on-demand observing and to conduct follow-up observations of exoplanet transits, contributing to several ongoing programs. In addition, Thunderbird South’s wait-and-catch method of tracking satellites and space debris allows for the assessment of their impact on ground-based observations as well as broader Earth-space sustainability concerns. Following its installation in October 2023, the observatory underwent a thorough commissioning phase involving testing, calibration, and procedural optimization to establish a baseline for functional protocols and to verify the telescope’s efficacy. I present the intricacies of these developments and report the anticipated data quality and resolution, offering insight into the possibilities and limitations for exciting future projects conducted with Thunderbird South. | ||
11:40 | Asteroseismic Models of Eclipsing Binary Star Eridani | Rafael Rezende Freire | HEBB 216 |
Abstract: | We present asteroseismic models of Eridani, an eclipsing binary star. Stellar parameters were determined by modeling light curves using the Physics of Eclipsing Binaries software package (PHOEBE). Once suitable fits to the eclipses were found, the theoretical light curve was subtracted from TESS light curves, leaving only the pulsational variability. Theoretical frequencies were generated using evolutionary models from the Modules for Experiments in Stellar Astrophysics software package (MESA) alongside the stellar oscillation code GYRE. Comparison of models and observations showed an abundance of g-mode frequencies, but the best fitting models are much younger than previously found for this star. |
Afternoon Session |
Time | Title | Presenter | Room |
2:00 | A Three-Humped Camel: Assessing a Peculiar Supernova | Sam Lakerdas-Gayle | HEBB 216 |
Abstract: | Core-collapse supernovae are the energetic explosions of high-mass stars at the ends of their lives. Different wavelengths of light from supernovae tell us different information about the supernova ejecta, the surrounding gas, and the star’s mass loss before explosion. Supernovae typically peak in their magnitude a few weeks after explosion and gradually dim over the next few months, yet we have found a handful of supernovae that have extremely bright radio emission over a year after explosion. We explore an especially interesting supernova that shows three peaks in its light curve, including a late-time radio rebrightening. While the source of this late-time rebrightening is currently unknown, we study the intriguing possible sources of this emission. We use spectral analysis of the radio emission from this supernova to constrain its physical parameters. By doing so, we can pinpoint the source of the late-time emission as either supernova ejecta interacting with a dense shell of circumstellar material, the emergence of an off-axis jet, or the emergence of a pulsar-wind nebula. | ||
2:20 | Adapting GSpyNetTree from LIGO to Virgo: Evaluating and improving glitch classification for gravitational wave detection | Airene Ahuja | HEBB 216 |
Abstract: | In order to detect gravitational waves (GWs), extremely precise detectors measure the ratio of change in the detector arm length to the total arm length, or the “strain”. The strain data also includes stationary background noise, and bursts of non-stationary noise called glitches. These glitches are problematic as they can mimic GWs in time-frequency morphology, which necessitates sophisticated tools that can separate the two. The Gravity Spy Convolutional Neural Network Decision Tree, or GSpyNetTree, is a machine learning signal-vs-glitch classifier that takes in a time-frequency spectrogram of a potential GW signal and determines the probability that it is consistent with one or more classes of glitches, a true GW signal, or background noise. While GSpyNetTree has achieved over 96% accuracy on LIGO glitches, the Virgo detector recently joining the current observing run means it is important to determine how GSpyNetTree performs on Virgo data. After some preliminary analysis, we tested a subset of Virgo glitches consistent with commonly observed “worst offenders”. Of the 3102 glitches predicted to be in the Koi Fish class, 174 or 5.6% were misclassified as GWs. 540 of the 2559 glitches in the light scattering class, or 21.1%, were misclassified as GWs. However, many of the glitches in this latter category exhibited high frequency characteristics inconsistent with light scattering, which suggests GSpyNetTree needs a new class to better account for these glitches. This project is ongoing, and the next steps will be to test a random sampling of Virgo glitches, retrain GSpyNetTree with a new “high frequency” glitch class, and then test another random sampling to determine how GSpyNetTree’s performance changes. We hope to reach a high enough accuracy to apply this tool to Virgo data during this current observing run. | ||
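The misclassification rates quoted above can be checked directly from the stated counts:

```python
# Recomputing the misclassification rates quoted in the abstract.
print(f"Koi Fish:         {174 / 3102:.1%}")  # -> 5.6%
print(f"Light scattering: {540 / 2559:.1%}")  # -> 21.1%
```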
2:40 | Low-Metallicity Stars in the Large Magellanic Cloud: Tracing the Early Conditions of Star Formation | Nicholas Zaparniuk | HEBB 216 |
Abstract: | This presentation explores the early stages of star formation and galaxy evolution through the study of extremely metal-poor (EMP) stars in the Large Magellanic Cloud (LMC). Using high-resolution spectroscopic data, we examine elemental abundances, with a focus on stellar nucleosynthesis and delayed r-process enrichment, providing key insights into the role of neutron star mergers in producing heavier elements via r-process enrichment.
The analysis reveals the presence of both r-I and r-II stars, indicating that neutron star mergers may have played a more significant role in enriching the LMC than previously thought. These findings offer some clarity into the distinct chemical evolution of dwarf galaxies like the LMC compared to the Milky Way. The talk will also highlight the absence of carbon-enhanced metal-poor (CEMP) stars and the discovery of the LMC’s first nitrogen-enhanced metal-poor (NEMP) star, raising questions about the unique chemical pathways in this galaxy and how early conditions may have influenced its initial star formation. Additionally, we will discuss the broader implications for galaxy formation and evolution, examining what we have been able to learn, exploring future research directions, and addressing the ongoing questions driving current astrophysical research. | ||
3:00 | Smoothed-Particle Hydrodynamics for Astrophysics | Paul Richter | HEBB 216 |
Abstract: | I seek to present a review of smoothed-particle hydrodynamics (SPH) as applied to astrophysical systems, with a focus on broad theoretical components rather than fine-grained mathematics and derivations. Specifically, I will discuss the origins of SPH, some key thermodynamic concepts and assumptions implicit in its implementation, and the artificial components necessary to achieve certain phenomena (namely bulk viscosity). This will be followed by a brief discussion of an SPH code developed in Python with Mateus Fandiño at Thompson Rivers University. I will show the results of simulations done with our code, including the effects of bulk viscosity, temperature, mass, and angular momentum on the behaviour of molecular clouds. Animations of the most relevant and/or interesting behaviour from our research will be included. This will conclude with a note on the direction of our future work, namely translation into C or another fast compiled language, and larger-scale simulations using Digital Research Alliance of Canada resources. Computational methods for numerical integration (Runge-Kutta-Fehlberg v. Leapfrog) and root-finding (Newton-Raphson v. Bisection) will also be mentioned, time permitting. | ||
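The core of any SPH code like the one described is the kernel-weighted density estimate; here is a self-contained sketch using the standard cubic-spline kernel in 3D (a generic textbook implementation, not the authors' code).

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard M4 cubic-spline SPH kernel in 3D, with support 2h."""
    q = r / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w / (np.pi * h**3)

def density(positions, masses, h):
    """SPH density at each particle: rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

# A small random "molecular cloud" of equal-mass particles.
rng = np.random.default_rng(42)
pos = rng.normal(scale=1.0, size=(500, 3))
m = np.full(500, 1.0 / 500)
rho = density(pos, m, h=0.4)
print(rho.min(), rho.max())  # density rises toward the cloud centre
```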
3:20 | Studies on the temperature dependent drift velocity for the HELIX Drift Chamber Tracker | Gabrielle Barsky-Giles | HEBB 216 |
Abstract: | HELIX (High Energy Light Isotope eXperiment) is a balloon experiment designed to measure the abundance of cosmic ray isotopes from hydrogen to neon, with a particular interest in the abundances of beryllium isotopes. HELIX aims to provide essential data for studying cosmic ray propagation in our galaxy. The Drift Chamber Tracker (DCT) in HELIX is a multi-wire gas drift chamber designed to measure the position of incident cosmic rays. It is located inside a magnet, bending the trajectory of incoming particles through 72 layers of tracking and enabling the measurement of the momentum of incoming particles. I will present my wire-by-wire study of the maximum drift distance in the DCT data, as well as an analysis of the temperature dependence of the drift velocity and of the temperature gradient throughout the detector over the flight. | ||
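Not the HELIX analysis itself, but the temperature-dependence step reduces to a calibration fit of drift velocity against temperature; a sketch with invented numbers:

```python
import numpy as np

# Invented calibration points: drift velocity (um/ns) at several
# chamber temperatures (deg C). Placeholders, not HELIX data.
temperature = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
drift_velocity = np.array([49.8, 50.1, 50.5, 50.8, 51.2])

# Linear model v(T) = v0 + k * T for the temperature dependence.
k, v0 = np.polyfit(temperature, drift_velocity, 1)
print(f"v(T) ~ {v0:.2f} + {k:.3f} * T")

# Rescale a drift distance measured at 28 C to the 20 C reference,
# assuming the same linear temperature scaling holds.
scale = (v0 + k * 20.0) / (v0 + k * 28.0)
print(f"distance scale factor: {scale:.4f}")
```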
3:40 | Early Time Dynamics in Heavy Ion Collisions | Bryce Friesen | HEBB 216 |
Abstract: | The study of systems that are strongly interacting plays an important role in the development of our understanding of the physical world. Quantum chromodynamics (QCD), the theory that governs the interactions of subatomic particles on a nuclear level, is strongly interacting. It has been known for years that this property of QCD is responsible for many of the unusual and interesting features of nuclear matter. Over the past 25 years, expensive experimental programs involving particle accelerators have been developed, for example at Brookhaven National Laboratory and CERN, to study strongly coupled systems empirically. Quark gluon plasmas are produced in relativistic nuclear collisions, and many exciting experimental results have become available. Mathematically, the study of strongly coupled systems is very difficult. Many of the standard calculational techniques that have been developed by generations of physicists, largely in the context of quantum electrodynamics (QED), are not applicable. We study the dynamics of the gluon fields that exist at very early times after a collision of relativistic heavy ions. We find analytic solutions to the Yang-Mills equations using an expansion in proper time. These colour electric and magnetic fields are used in a Fokker-Planck formulation to study the momentum broadening of a hard probe traversing the plasma. We show that the early time gluon fields produce significant momentum broadening. | ||
4:00 | Study of electron transport in 2D semiconductors using pump-probe differential reflection techniques | Fangzheng Qu | HEBB 216 |
Abstract: | This study focuses on measuring the exciton lifetime in two-dimensional layered tungsten selenide (WSe2). Over the summer, the primary objective was to capture a differential reflection signal resulting from this phenomenon using pump-probe spectroscopy. This technique employs ultrafast, mode-locked light pulses, which are split into two pathways: the pump beam, which excites electrons in the sample by providing higher intensity, and the probe beam, which, with lower intensity, measures resultant changes in the material’s reflectivity. Various optical devices were employed to fine-tune the pulses’ power, polarization, wavelength, and other properties. Temporal changes in the signal were detected by adjusting the optical path length of the pump beam with a computer-controlled delay stage. Spatial signal changes across a two-dimensional area were captured using a motor-driven multi-mirror setup, known as a GALVO, which directed the probe beam across different regions of the sample. The data obtained were then used to measure the differential reflectivity and plot its spatial distribution over time. By the conclusion of this study, a functional pump-probe setup was established, capable of exciting bulk WSe2 samples with multiple layers, detecting signals, and generating spatial distribution plots. Future research will focus on studying electron transport and relaxation in a variety of 2D semiconducting materials alongside optimizing the setup for enhanced performance. |
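Exciton lifetimes are typically extracted from such pump-probe transients with a single-exponential fit to the differential reflection signal; a sketch with synthetic data (not measurements from this setup):

```python
import numpy as np
from scipy.optimize import curve_fit

def transient(t, amplitude, tau, offset):
    """Single-exponential model for a pump-probe signal dR/R(t)."""
    return amplitude * np.exp(-t / tau) + offset

# Synthetic differential-reflection transient: tau = 35 ps plus noise.
rng = np.random.default_rng(7)
t = np.linspace(0.0, 200.0, 400)  # delay-stage positions, in ps
signal = transient(t, 1.0e-4, 35.0, 0.0) + rng.normal(0.0, 3e-6, t.size)

popt, pcov = curve_fit(transient, t, signal, p0=(1e-4, 20.0, 0.0))
tau, tau_err = popt[1], np.sqrt(pcov[1, 1])
print(f"fitted exciton lifetime: {tau:.1f} +/- {tau_err:.1f} ps")
```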
Biophysics & Medical Physics
Morning Session |
Time | Title | Presenter | Room |
10:00 | Using single-molecule and bulk techniques to explore the biophysics of secondary structure formation in supercoiled DNA plasmids | Alexis Hilts | HEBB 218 |
Abstract: | The DNA in our cells exists out of equilibrium, with its level of supercoiling (amount of right-handed twist) constantly fluctuating as it is used for gene expression, regulation, and replication. Supercoiling induces a torsional strain in the molecule that can cause structural transitions in the DNA, such as unwinding of adenine-thymine (AT)-rich regions to become single-stranded, forming sites where proteins can bind to regulate gene expression. The equilibrium thermodynamics describing the theoretical structure of a DNA plasmid (circularized DNA molecule) under different conditions is well understood and provides a good model for predicting secondary structures, though challenges remain in describing the non-equilibrium evolution of a system such as a dynamic supercoiled domain of DNA within a cell. This theoretical model was used to inform experimental decisions and compared to results. In this study, a circular DNA plasmid with two AT-rich unwinding sites was used as a model system. A stemless molecular beacon (single-stranded DNA probe with no hairpin) with a fluorophore and quencher, along with Convex Lens-induced Confinement (CLiC) single-molecule microscopy, was used to detect the single-stranded regions in each DNA plasmid and uncover the distribution of structural states. These results were compared to findings from bulk assays: fluorescence plate reader kinetic curves and chemical footprinting with agarose gels. These techniques together provide an accurate method to detect secondary structures in supercoiled DNA. Further, they give a platform for studying these molecules at different conditions to gain a deeper understanding of secondary structure formation, and eventually better understand the physical basis of gene regulation. | ||
10:20 | Validating a Rectal Surface Mapping Method for Improved Dose Accumulation Accuracy in Prostate Cancer Radiotherapy | Jacob Smit | HEBB 218 |
Abstract: | Purpose: Prostate cancer radiotherapy requires precise targeting to minimise exposure of organs at risk. Standard dose accumulation methods use deformable image registrations (DIR) to map multi-treatment dose onto the same structures, which does not account for anatomical changes like rectal fill or gas. Dose surface maps (DSMs) offer a promising alternative by focusing on consistent rectal regions despite deformations. This study validates a method of parameterizing the rectum in cylindrical coordinates—depth (s) and azimuth angle (φ)—to ensure a point at (s, φ) corresponds to the same anatomical location despite deformations, enabling more realistic dose accumulation and improved patient care. Methods: A silicone rectum phantom, embedded with ten thermoluminescent dosimeters (TLDs) and film, was used for validation. Four CT scans were acquired, each with random phantom deformations simulating inter-fraction motion. The difference in TLD positions from the first scan was calculated, and predictions were made by comparing the same (s, φ) points on subsequent scans. Finally, deformable registration was performed using Eclipse software, and predicted TLD positions were compared. Results: The Pearson correlation coefficient between the TLD position displacement in absolute coordinates and the parameterized s, φ coordinates was -0.48 (p=0.0073), indicating that rectum phantom deformations did not significantly increase the error in the predicted TLD positions using the parameterization method. The average simulated inter-fraction motion of the TLDs was 8.7 mm from their position on the initial scan. Despite this, the average error in the parameterized coordinate prediction was 2.2 mm, limited by scan resolution. Conclusions: The parameterization method shows promise as an alternative dose accumulation method, potentially reducing DIR errors. Validation with the silicone phantom supports its accuracy, though rectal wall anisotropy presents a challenge. Next steps include measuring accumulated doses at each TLD and film, comparing them to DSM sums, and correlating DSMs with patient outcomes. | ||
10:40 | Development of Alpha Spectrometry for Diagnostics of TAT on Cancer Cells in Vitro | Sidney Shapiro | HEBB 218 |
Abstract: | Targeted Alpha Therapy (TAT) using Actinium-225 (²²⁵Ac, t₁/₂ = 9.9 d) is considered a promising cancer treatment due to its potent alpha emissions, which induce DNA double-strand breaks in cancer cells. However, challenges persist in accurately detecting and investigating the behavior of ²²⁵Ac-labeled pharmaceuticals in tumors. In this study, the uptake efficiency of ²²⁵Ac-CROWN-TATE in AR42J pancreatic tumor cells was investigated using a novel method based on alpha spectroscopy at atmospheric pressure. A Bio-sample Alpha Detector (BAD), equipped with a Si PIN photodiode with an active area of 18 × 18 mm and operated at a bias voltage of +70 V, was employed. The amplified output was then fed into a 12-bit multi-channel analyzer (MCA), with data processed by a Raspberry Pi 5. Front-end software developed using CERN’s ROOT framework was used to read the MCA data and generate plots of event counts versus channel number. SpectroMicro XRF sample cups were utilized for cell sample preparation. These cups, with dimensions of 23.9 mm × 18.4 mm (outer diameter) × 19.4 mm (height), include a Mylar foil layer with a thickness of 2.5 µm at the bottom. The sample cup was positioned on the detector at a distance of 100 µm. AR42J mouse pancreatic tumor cells were incubated with ²²⁵Ac-CROWN-TATE inside a normal plate for 1 hour and then transferred to the foil cups. The alpha spectrum was taken for 900 s, reaching a statistical uncertainty below 1%. Preliminary results demonstrate measurable uptake of ²²⁵Ac-CROWN-TATE by the cells, as observed through distinct spectral differences between labeled cells, reference samples, and unlabeled ²²⁵Ac. Furthermore, the decay product ²¹³Bi was detected exiting the cells, indicating partial retention of the radiolabel. These findings suggest that this new method holds potential for advancing studies of TAT and can be further refined through collaboration. | ||
11:00 | Characterizing sources of noise in a mobile magnetic brain imaging system | Carson Leslie | HEBB 218 |
Abstract: | Magnetoencephalography (MEG) is a non-invasive technique that detects magnetic fields produced by human brain activity. The Biosignal Lab is the first in Canada with a mobile magnetic shield, which uses novel sensors. Noise sources in these new recordings are not well understood. Quantifying and reducing the magnetic noise are imperative to increasing the signal-to-noise ratio during human recordings, improving our brain mapping ability. This study evaluates potential sources of noise in our system. We hypothesized that the noise is caused by vibrations of the shield or by poor attenuation of external fields. Field mapping quantified the residual vector field inside the shield. The effect of vibration was tested by increasing the magnetic field gradient via Helmholtz coil. Sensor data were collected alternating between the standard and enhanced gradients and with the coil coupled either to the shield or to the sensors. The relationship between gradient strength, sensor coupling, and noise was investigated. To test the attenuation of external fields, concurrent field data were collected inside and outside the shield. Data were compared based on their amplitude and coherence spectra. Amplitude spectra for our sensors show 10 peaks between 10 and 150 Hz of unknown source. The coil increased the residual dBz/dz gradient from 0.3 nT/cm to 4.5 nT/cm (×15). Activating the coils increased the peak amplitudes by an average of 0.4 ± 0.7 dB or 0.5 ± 1.5 dB depending on how they were coupled. Some peak frequencies occurred in the amplitude spectra both inside and outside the shield. High coherence occurred at power line frequencies and 16-20 Hz. There is little evidence that gradient strength has a direct effect on magnetic noise, irrespective of coupling method. There was some evidence for a shared signal recorded inside and outside the shield. Further investigation is needed to identify the sources of noise in our data. | ||
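The inside/outside comparison by amplitude and coherence spectra can be sketched with SciPy’s spectral tools. The two signals below are hypothetical stand-ins for concurrent recordings, and the sampling rate and segment length are illustrative only, not the lab’s acquisition settings.

```python
# Minimal sketch: Welch amplitude spectrum and inside/outside coherence.
import numpy as np
from scipy import signal

fs = 1000.0                                  # sampling rate (Hz), hypothetical
t = np.arange(0, 60, 1 / fs)
shared = np.sin(2 * np.pi * 60 * t)          # e.g. power-line pickup
inside = 0.1 * shared + np.random.normal(0, 1, t.size)
outside = shared + np.random.normal(0, 1, t.size)

f_psd, psd_inside = signal.welch(inside, fs=fs, nperseg=4096)
f_coh, coh = signal.coherence(inside, outside, fs=fs, nperseg=4096)
print(f"coherence near 60 Hz: {coh[np.argmin(abs(f_coh - 60))]:.2f}")
```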
11:20 | Development of a Treatment Planning Software for a Kilovoltage Radiotherapy System | Jacob Atkinson | HEBB 218 |
Abstract: | Radiotherapy planning is a balancing act between accuracy and speed. Analytic models provide faster dose calculations than Monte Carlo and superposition/convolution methods, often at the expense of accuracy. For low-energy X-rays (20–400 keV), we can approximate the dose as an exponential decay with depth. With this model, a fast treatment planning system and optimization algorithm for beam placement were designed for a dual robot kilovoltage (kV) external beam delivering radiation from a large selection of polar and azimuthal angles. Prior to the development of the treatment planning software, Python packages were created to process patient computed tomography (CT) datasets and patient structure files. A user interface was produced to compile the workflow. To start, the user specifies the number of beams for planning. These beams are equally distributed in polar angle theta while keeping the azimuthal angle phi fixed. The value of a cost function for this configuration is calculated, rewarding dose to the planning target volume (PTV) and penalizing dose to organs at risk (OARs). A single beam is then changed to a random combination of theta and phi, and the cost function is recalculated; the new configuration is kept if its cost is smaller. This is repeated for a number of iterations specified by the user or until the convergence criterion is met. This software was tested for two different cases: a cubic solid water phantom with spherical water contours drawn inside and a prostate patient case. In both tests, the optimizer converged to plans that targeted the PTV while minimizing dose in the OARs in roughly 30 and 40 minutes, respectively. The results show a working proof of concept for a kV dual robot treatment planning system. Future work includes further improvements to the accuracy of the dose model and increasing the efficiency of the optimization routine. | ||
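The optimization loop described above (random single-beam perturbations accepted only when the cost decreases, under an exponential depth-dose model) can be sketched in a few lines of Python. The geometry, attenuation coefficient, and cost weight below are hypothetical placeholders, not the values used in the actual software.

```python
# Minimal sketch of the random-search beam-angle optimization; illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def beam_dose(theta, phi, voxels, mu=0.02):
    """Toy exponential depth-dose: dose ~ exp(-mu * depth) along the beam axis.
    Depth is approximated as each voxel's projection onto the beam axis."""
    axis = np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
    proj = voxels @ axis
    return np.exp(-mu * (proj - proj.min()))

def cost(angles, ptv, oars, w_oar=2.0):
    """Reward dose in the PTV, penalize dose in the OARs (lower is better)."""
    total = sum(beam_dose(t, p, np.vstack([ptv, oars])) for t, p in angles)
    return -total[: len(ptv)].sum() + w_oar * total[len(ptv):].sum()

# Hypothetical voxel coordinates (mm) for the target and one organ at risk.
ptv = rng.normal(0, 5, size=(50, 3))
oars = rng.normal([0, 25, 0], 5, size=(50, 3))

# Beams start equally spaced in theta at fixed phi, then are perturbed.
angles = [(t, 0.0) for t in np.linspace(0.1, np.pi - 0.1, 8)]
best = cost(angles, ptv, oars)
for _ in range(500):
    trial = list(angles)
    trial[rng.integers(len(trial))] = (rng.uniform(0, np.pi),
                                       rng.uniform(0, 2 * np.pi))
    c = cost(trial, ptv, oars)
    if c < best:                      # keep the change only if cost decreases
        angles, best = trial, c
print(f"final cost: {best:.2f}")
```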
11:40 | Impact of Acceptance Angle on Image Quality for Fourier Rebinned Monte Carlo-Generated PET Data | Evelyne Hluszok | HEBB 218 |
Abstract: | [Introduction] Rebinning algorithms, including Fourier Rebinning (FORE), convert three-dimensional positron emission tomography (PET) sinograms into a stack of two-dimensional sinograms. This may be used to produce numerous sinogram-image pairs from a limited number of scans, which enables efficient training of machine learning algorithms. Rebinned sinograms contain fewer counts compared to 3D sinograms, yielding lower-quality reconstructions. This work investigates the relationship between FORE acceptance angle and image quality, as quantified by the structural similarity index metric (SSIM), normalized mean square error (NMSE), and region of interest (ROI) analysis. [Methods] An anthropomorphic Extended Cardiac-Torso (XCAT) phantom and a quality assurance NEMA phantom were investigated. The Geant4 Application for Tomographic Emission (GATE) was used to simulate data acquisitions in a GE D690 scanner. Software for Tomographic Imaging (STIR) was used to perform data corrections, rebinning, and image reconstruction. Rebinning was performed with acceptance angles ranging from 0° to 12.1°, which is the maximum possible for the D690. Iterative reconstructions were compared to the ground truth images. [Results] ROI analysis of the NEMA phantom found that contrast increased with larger acceptance angles. For a 13 mm diameter ROI, the NEMA contrast increased by approximately 1%. Improvements of up to 5.1% were observed for larger ROIs. Increasing the acceptance angle resulted in diminishing improvements according to SSIM and NMSE. For example, a 120 s data acquisition of the anthropomorphic phantom rebinned with a 2.7° acceptance angle had an SSIM of 0.75. Halving the simulation time required the acceptance angle to be increased by a factor of 4.5 to achieve an approximately equal SSIM of 0.76. [Conclusion] In general, image quality metrics improved as the acceptance angle increased, but these improvements were marginal above 6°. These results will be used to inform the generation of a publicly available dataset for PET machine learning research. |
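The two scalar metrics used above can be computed with standard tools. A minimal sketch, assuming scikit-image for SSIM and a hand-rolled NMSE, with random arrays standing in for the reconstructed and ground-truth images:

```python
# Minimal sketch: SSIM and NMSE between a reconstruction and its ground truth.
import numpy as np
from skimage.metrics import structural_similarity

def nmse(recon, truth):
    """Normalized mean square error relative to the ground truth."""
    return np.sum((recon - truth) ** 2) / np.sum(truth ** 2)

truth = np.random.rand(128, 128)                       # stand-in ground truth
recon = truth + np.random.normal(0, 0.05, truth.shape) # stand-in reconstruction

ssim = structural_similarity(truth, recon,
                             data_range=truth.max() - truth.min())
print(f"SSIM = {ssim:.3f}, NMSE = {nmse(recon, truth):.4f}")
```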
Afternoon Session |
Time | Title | Presenter | Room |
2:00 | Cognitive Flocks: Information Propagation And Threat-Induced Dynamics | Cecilia Soroco | HEBB 218 |
Abstract: | Collective motion is a phenomenon observed in many active matter systems, from flocks of birds to microscopic bacteria colonies. These non-equilibrium systems consist of hundreds or thousands of agents, so it is remarkable that ordered behaviour such as flocking and swarming can emerge despite fluctuations in individual behaviour. In recent years, the Inertial Spin Model (ISM) was introduced as one of many models attempting to characterize the motion of flocks, specifically by adding an inertia associated with the orientation of each agent, in addition to interparticle interactions. Using this model, scientists were able to reproduce the experimentally observed speed of information propagation throughout a flock performing a coherent turn. In this project, we investigate an extension of the ISM we refer to as the Vision-Based ISM, which introduces cognitive interactions between agents to more accurately represent the behaviour of real-life flocks. By introducing a predator to the system, we study how varying parameters such as the speed of the flock, the strength of interactions, and the magnitude of information damping affect the flock’s response to a threat. Our simulations show a correlation between the strength of the threat and the speed of information propagation throughout the flock, as well as a correlation between the level of order within the system and the radius of curvature of the centre-of-mass trajectory while avoiding the threat. In addition, we reveal which starting parameters are more likely to lead to splitting of the flock, and we show that the time required for the flock to reach its closest distance to the predator is independent of the initial velocity. Our conclusions could be used to inform future studies requiring specific behaviour in simulations of flocking particles. | ||
2:20 | Automatically segmenting catheter tips in prostate brachytherapy ultrasound images using a deep learning and feature extraction pipeline | Jessica de Kort | HEBB 218 |
Abstract: | One in eight Canadians assigned male at birth receives a diagnosis of prostate cancer in their lifetime. A common prostate cancer treatment is high-dose-rate brachytherapy, involving the placement of a radioactive source into the prostate and surrounding tissues via multiple catheters (typically 16–18). To guide catheter placement, transrectal ultrasound (TRUS) images are used, but shadowing artifacts and natural bending of the catheters make catheter segmentation challenging and time consuming. Developing intraoperative tools for catheter localization aims to reduce procedure time, thereby minimizing risks associated with prolonged anesthesia. This study combined automated methods for identifying curved catheter tips in three-dimensional (3D) TRUS images with a deep learning and feature extraction approach. The pipeline processed TRUS images using a 3D U-Net architecture to generate prediction point-clouds, which were then refined with a 3D Hough transform and curve-fitting techniques. The model was trained on 67 patients and tested on 21 patients (343 catheters). The predictions were compared to ground truths, manually identified by medical physicists. Overlap between ground truths and predictions was analyzed, and the Hausdorff distance (HD) for each catheter was determined to quantify the maximum distance between the ground truth and the nearest prediction point. Predicted catheters showed good agreement with ground truths, with an average HD of 1.4 ± 0.6 mm. The average tip difference of 3.0 ± 0.4 mm and the Dice similarity coefficient (DSC) of 0.42 ± 0.17 indicate that further refinement is required. Our results indicate moderate pixel-wise overlap between ground truths and predictions, and show the need to improve curved catheter tip identification to reduce errors. Improving automation in prostate brachytherapy can help minimize risks associated with prolonged anesthesia and reduce human errors in clinical procedures. | ||
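The Hausdorff-distance comparison described above can be sketched with SciPy’s directed_hausdorff; the symmetric HD is the larger of the two directed distances. The point clouds below are hypothetical stand-ins for one predicted catheter and its manually identified ground truth (coordinates in mm), not data from the study.

```python
# Minimal sketch: symmetric Hausdorff distance between two 3D point clouds.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

ground_truth = np.column_stack([np.zeros(50), np.zeros(50),
                                np.linspace(0, 60, 50)])        # straight catheter
prediction = ground_truth + np.random.normal(0, 0.5, ground_truth.shape)

# Symmetric HD: the larger of the two directed Hausdorff distances.
hd = max(directed_hausdorff(ground_truth, prediction)[0],
         directed_hausdorff(prediction, ground_truth)[0])
print(f"Hausdorff distance: {hd:.2f} mm")
```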
2:40 | X-ray Fluorescence Analysis of Nail Clippings to Determine Zinc Levels in Children of India | Jasmine Ouellette | HEBB 218 |
Abstract: | Zinc is vital for proper functioning of the body, and overall health and well-being. Given zinc’s involvement in numerous bodily functions, and the body’s inability to store zinc, regular dietary intake is essential. Inadequate zinc intake can lead to various health problems. Many biomarkers indicative of zinc status have been identified, the most common being serum/plasma zinc concentration. Recently, nails have been of special interest, due to their painless, non-invasive collection and their ease of storage and transport. Although nail zinc concentration is not yet a widely accepted biomarker, it shows promise. So far, its use as a zinc biomarker has been limited, partially due to the expense and time requirements of existing measurement techniques. Our work consisted of measuring nail zinc concentrations of 54 children from India using X-ray fluorescence (XRF). This was done as part of a larger study on quintuply-fortified salt. Because the built-in matrix options of our XRF system are limited and may differ from the actual nail material, the results directly from the XRF are not expected to show the true concentrations. Therefore, the nail clippings were sent to another lab for ICP-OES analysis, which is considered the “gold standard” measurement technique for elemental concentrations. The goal is to compare our XRF results to the zinc concentrations determined by ICP-OES. This could offer a faster and non-invasive approach to identifying zinc status among individuals. Additionally, while the nails of each individual were cleaned before XRF analysis, uncleaned nail samples from six of these individuals were also sent to our lab. A head-to-head comparison of the results from the uncleaned and cleaned samples from the same individuals was done. This allowed us to assess whether cleaning affects nail zinc concentrations. | ||
3:00 | Improving Prompt Gamma Imaging with Compton Cameras for Range Verification in Proton Therapy by Adding a Filter Step | Jenny Zhu | HEBB 218 |
Abstract: | Proton therapy allows for more precise tumor irradiation compared to conventional radiation therapy due to the pronounced Bragg peak in the proton beam’s dose distribution, where the dose maximum along the beam axis is followed by a sharp fall-off. However, small shifts in beam range can cause irradiation of healthy tissues and underdosage of tumors. Range verification methods are crucial for addressing these risks. Some of these methods focus on imaging prompt gamma rays, one of the secondary byproducts of proton beams, which have been shown to correlate with dose distribution. By simulating imaging scenarios with a Compton camera and reconstructing the images using Origin Ensemble (OE), this project aims to improve the quality of reconstructed images by adding a filtering step. We utilized Monte-Carlo simulations to model the behaviour of proton beams with therapeutic energy in a homogeneous polymethyl methacrylate (PMMA) phantom. The OE iterative algorithm, based on Markov-chain Monte Carlo methods, was used to reconstruct images. Gaussian filters were employed at each OE iteration to calculate local event density (Gaussian-OE). For analysis, the images resulting from the 1000th OE iteration were used to investigate the filters’ effects on image quality. Quantitative image analysis revealed that Gaussian-OE performed better than conventional OE. Gaussian-OE decreased the reconstructed images’ spill-over ratio by 3% from 0.71 to 0.69 and increased the signal-to-noise ratio by 2% from 14.96 to 15.22, indicating Gaussian-OE recovered more intensity in the region of interest where the prompt gamma distribution was simulated. Future investigations to enhance image accuracy can incorporate different choices of a priori probability functions into Gaussian-OE algorithms and study the effects of different filtering steps. | ||
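One way to picture the added filtering step, estimating a local event density by Gaussian-smoothing a histogram of reconstructed event positions, is the short sketch below. It is illustrative only (SciPy’s gaussian_filter applied to synthetic events), not the Gaussian-OE implementation itself.

```python
# Minimal sketch: local event density via a Gaussian-smoothed 2D histogram.
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical reconstructed event positions (mm) after an OE iteration.
events = np.random.normal(loc=[60, 40], scale=[15, 5], size=(10_000, 2))

hist, _, _ = np.histogram2d(events[:, 0], events[:, 1],
                            bins=128, range=[[0, 120], [0, 80]])
density = gaussian_filter(hist, sigma=2.0)  # smoothed local event density
print(f"peak density: {density.max():.1f} events/bin")
```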
3:20 | Inductive Losses in Conductive Samples | Tashi Wangchuk | HEBB 218 |
Abstract: | Radiofrequency (RF) probes are vital in Magnetic Resonance research, as they play an essential role in both the transmission and reception of RF signals. We are exploring a non-invasive method to measure conductivity in water-based samples using an RF-probe loading technique. These measurements do not require magnetic resonance experiments and are comparatively simple. The research focused on analyzing the response of an RF probe to varying conductivities in brine solutions by measuring the quality factor of the RF circuit using a network analyzer. This approach allowed us to directly correlate the sample’s conductivity with the observed inductive losses, providing valuable insights into how these losses impact the probe’s performance. The study involved both theoretical modeling and experimental validation of two distinct RF probe configurations: solenoidal probes with cylindrical samples and surface coils with effectively infinite planar samples. For the solenoidal probe, we derived the magnetic field distribution and calculated the associated inductive losses within cylindrical conductive samples. The surface coil experiments focused on the challenges presented by non-uniform magnetic fields and their impact on planar samples. By comparing the theoretical predictions with experimental measurements, we gained a deeper understanding of the relationship between probe design, sample geometry, and conductivity, ultimately contributing to the optimization of RF probe designs for specific applications. Future work will build on these findings, including surface coil measurements at higher frequencies and with reduced coil radii for improved sensitivity. A key focus will be experiments involving real samples, such as measuring the conductivity of fluid flow, to further refine and apply this non-invasive technique. This research not only advances the optimization of RF probe designs but also opens avenues for practical applications in material characterization and biological studies. | ||
3:40 | Solving Self Assembly | Tighe McAsey | HEBB 218 |
Abstract: | Particle self assembly is the transformation of particles in a disordered system into predetermined structures. Self assembly methods make it possible to build structures on the scale of nanometers, something previously not possible. This technology has already been applied to important problems such as drug delivery and catalyst recovery. My talk will outline my research to develop a new theoretical model for self assembly, which reaches benchmark yields of completed structures exponentially faster. I will present how the complex problem of finding the best arrangements for self assembly can be reduced to a simple mathematical problem, providing deep insights and a simple solution. | ||
4:00 | Extending the solution to self assembly | Sushrut Tadwalkar | HEBB 218 |
Abstract: | Self assembly occurs when particles are synthesized to react with customized bond strengths such that they assemble themselves into desired structures. This area of study has mainly garnered attention because it allows us to build structures on the scale of nanometers, something historically not possible, even using machines. Self assembly has a rich variety of applications: from making advances in cancer drug delivery, to building tiny circuits. My talk will focus on extending the central idea from my colleague’s talk “Solving Self Assembly” to bigger, more physically accurate, 3-dimensional systems using simulations, and seeing how they match up with the theoretical approach in smaller systems. |
Quantum & Condensed Matter PhysicsMorning Session |
|||
Time | Title | Presenter | Room |
10:00 | Pump-Probe Analysis of a Phase Transition: A Case Study Using the Ising Model | Keely Ralf | HEBB 314 |
Abstract: | A liquid-liquid phase transition (LLPT) has been proposed to occur in supercooled water, in which high-density and low-density liquid phases become distinct below a critical temperature located in the supercooled region of the water phase diagram. The LLPT is challenging to detect because it must be observed in the supercooled liquid in the brief time window (< 10 ms) prior to crystal formation. Recent pump-probe experiments provide evidence of the LLPT by heating thin-film amorphous ice samples with an IR laser pulse to conditions close to the LLPT, and then probing the response using x-ray laser pulses. To provide guidance for interpreting the x-ray scattering results from these experiments, we conduct simulations of the Ising model which mimic the sample geometry and pump-probe methodology of the experiment. We model 2D and 3D thin-film Ising systems having free boundary conditions along one direction. Starting from a homogeneous spin-up phase, we abruptly change the temperature and external field to various values in the vicinity of the Ising critical point, monitoring the system properties as a function of time. Our results clarify how the non-equilibrium evolution of the system structure is influenced by proximity to the critical point. In particular, we quantify how a temperature gradient across the sample, believed to be present in the experiments on water, affects the structural evolution of the system. | ||
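A minimal sketch of the quench protocol described above, assuming a Metropolis Monte Carlo update for a 2D Ising film that is periodic along x and has free surfaces along y; the lattice size, temperature, and field below are illustrative, not the study’s parameters.

```python
# Minimal sketch: quench a thin-film 2D Ising model from a uniform spin-up state.
import numpy as np

rng = np.random.default_rng(1)
LX, LY, J = 64, 16, 1.0     # film: periodic along x, free surfaces along y
T, h = 2.0, -0.05           # hypothetical quench target (temperature, field)

spins = np.ones((LX, LY))   # homogeneous spin-up initial condition
for sweep in range(200):
    for _ in range(LX * LY):
        i, j = rng.integers(LX), rng.integers(LY)
        nn = spins[(i + 1) % LX, j] + spins[(i - 1) % LX, j]  # periodic in x
        if j + 1 < LY:
            nn += spins[i, j + 1]                             # free boundary in y
        if j - 1 >= 0:
            nn += spins[i, j - 1]
        dE = 2 * spins[i, j] * (J * nn + h)                   # flip energy cost
        if dE <= 0 or rng.random() < np.exp(-dE / T):         # Metropolis rule
            spins[i, j] *= -1
print(f"magnetization after quench: {spins.mean():+.3f}")
```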
10:20 | Implementing Pulse Compression Techniques for Four-Wave Mixing | Una Rajnis | HEBB 314 |
Abstract: | Four-wave mixing is a nonlinear ultrafast spectroscopic technique that can be used to study material properties such as the behaviour of mobile electrons in semiconductors. These properties are crucial for the development of semiconductor devices such as thin film transistors and solar cells. In this technique, ultrafast laser pulses are incident on the semiconductor sample, and the properties of the material can be evaluated by the change in the pulses as they pass through it. The pulses used need to be shorter than the phenomena under investigation, and for events such as electron movement, pulse duration needs to be on the order of femtoseconds. Ultrafast Titanium:Sapphire lasers are capable of producing these ultrashort pulses; however, when these pulses pass through optical elements such as lenses, their duration increases. Various techniques have been developed to “compress” these pulses and reduce their duration, including prism compressors, grating compressors, and pulse shapers; pulse shapers are preferred for this application as they allow for more precise control of the pulse duration. In this work, a pulse shaper was installed in a four-wave mixing setup, and autocorrelation measurements were performed using both the pulse shaper and a prism compressor that was previously installed to compare their efficacy in pulse compression. The pulse shaper was able to compress the pulse from approximately 100 femtoseconds to 33.9 femtoseconds, while the prism compressor was only able to compress the pulse to 44.6 femtoseconds. | ||
10:40 | Fabrication of Quantum Emitters Using Strained Transition-Metal Dichalcogenide Monolayers | Annika Kienast | HEBB 314 |
Abstract: | Transition-metal dichalcogenide (TMDC) monolayers have been recognized as suitable materials for quantum technologies given their unique optical and electronic properties. TMDC monolayers are two-dimensional semiconductor materials consisting of a transition metal atom bonded with two chalcogen atoms. Examples include MoSe₂ and WSe₂. While bulk samples of TMDCs are classified as indirect bandgap semiconductors, monolayers of each of these materials possess a direct bandgap. Direct bandgaps allow for efficient absorption and emission of light, which is necessary for optoelectronic devices. TMDC monolayers can be used to create quantum emitters, which are nanoscale devices that confine excitons, enabling the controlled emission of photons. We have fabricated tungsten diselenide (WSe₂) monolayers by mechanical exfoliation, employing a method similar to that used to create graphene. We then deposited them onto silicon substrates containing silicon dioxide nanopillars, which induce localized strain that decreases the bandgap of the monolayers to form quantum emitters. The spectra obtained from photoluminescence microscopy are used to confirm regions of monolayers based on the wavelength emitted. TMDC monolayers are one of the solid-state quantum emitter systems studied at Dalhousie’s lab in an effort to optimize quantum light sources. Optical pulse shaping of ultrafast lasers will be used to engineer the precise trigger pulses necessary for the quantum emitters formed by straining the TMDC monolayers to become ideal single photon sources. These devices lay the foundation for many areas of quantum information such as distributed quantum networks and quantum key distribution. | ||
11:00 | Probing Electron Self-Energy in the Kagome Superconductor CsV3Sb5 Using Angle-Resolved Photoemission Spectroscopy (ARPES) | Parvin Aliyeva | HEBB 314 |
Abstract: | In this study, we focus on the analysis of self-energy in the kagome superconductor CsV3Sb5 using angle-resolved photoemission spectroscopy (ARPES). ARPES is a technique that allows us to deduce the electronic structure of a material by measuring the kinetic energy of photoelectrons emitted in all directions after the material is exposed to monochromatic light. From the kinetic energy of the photoelectrons in the vacuum, we extract the energies and momenta of electrons in the material based on conservation laws. CsV3Sb5 is particularly interesting due to its electronic properties arising from the underlying kagome lattice as well as the coexistence of both charge density wave and superconductivity with putatively unconventional character. To understand what drives these phases, it is critical to analyze the interactions between electrons within the material. Electron self-energy is important in this sense, as it captures the renormalized energy experienced by the electrons as opposed to the band energies predicted by non-interacting models. By analyzing the self-energy, we can gain insights into various many-body interactions such as electron-phonon couplings, which are fundamental to the material’s properties. After correcting the measured energies and converting from angle to momentum, we leverage the sensitivity of ARPES to the single-electron-removal spectral function to access this self-energy in an energy- and band-resolved way. Through this, we find that the electron-phonon coupling in CsV3Sb5 is particularly strong in the kagome lattice-derived bands most affected by the transition to the CDW state, supporting the important role of band-dependent electron-phonon coupling in this system. | ||
11:20 | Characterization of Epitaxial Mn3Ge Thin Films Using Transmission Electron Microscopy | Jack Myra | HEBB 314 |
Abstract: | Transmission Electron Microscopy (TEM) is an experimental technique with significant applications in the characterization of epitaxially grown thin films. The quantum mechanical nature of electrons results in wave-like behavior, with the wavelength of an electron being much shorter than that of visible light. As a result, TEM microscopes are capable of producing atomic and nano-scale image resolutions, and have the ability to produce diffraction patterns at a higher resolution than other crystal diffraction techniques. TEM diffraction patterns of a given material contain structural information that, when used in conjunction with other experimental techniques, can help determine the crystal geometry of the material. A high-resolution diffraction pattern is especially important in the case of epitaxial Mn3Ge thin films, whose various crystallographic phases have similar diffraction patterns due to similar interatomic spacings. Sample thicknesses necessary for TEM (< 100 nm) are achieved by mechanically polishing the substrate on which a material was epitaxially grown. In this work, we mechanically polished plan-view TEM samples of Mn3Ge on SiC substrates. We performed selected area diffraction on our samples using a TEM microscope to obtain high resolution diffraction patterns. The selected area diffraction patterns further confirm the crystallographic structure of our materials. Specifically, the diffraction patterns obtained from TEM provide evidence contributing to the conclusion of a cubic structure in our Mn3Ge thin films. | ||
11:40 | Microcavity effect simulation in thin-film materials for applications in perovskite solar cells | Vanessa Smith | HEBB 314 |
Abstract: | Thin-film solar cells are promising devices for converting sunlight into usable electricity, composed of several thin layers of materials deposited on a substrate such as glass or plastic. The active layer of these cells absorbs photons and converts them into electrical current. In some devices, the active layer is perovskite, a cost-effective and lightweight material that is easy to manufacture. Incident light travels through multiple layers before reaching the active layer. Partial transmission and reflection occur at interfaces, causing interference. When the active layer is thin, light passes through with few photons absorbed. However, when the layer thickness is >1 μm, a “microcavity” forms, confining light. Here light reflects and interferes, resonating if the cavity’s dimensions are a multiple of the wavelength. Resonance enhances photon absorption due to increased light intensity. Conversely, if the thickness or wavelength is improperly aligned, light may interact destructively, reducing intensity and photon absorption. Wavelength dependence can limit the range of light absorbed by the active layer, which is not ideal for real-world applications. Both the incident wavelength and thickness of microcavities influence the behaviour of thin-film devices. The goal of this project was to examine the changes in the electric field and in absorbed, transmitted, and reflected light when microcavities are introduced in thin-film devices. Materials were simulated in Python as stacks of discrete layers. The effects of varying layer thickness, material composition, incident wavelength, and number of layers were calculated using the transfer matrix method. This method calculates light propagation through layers, including absorption, as well as the transmission/reflection and resulting phase shifts at interfaces between materials. These calculations can be used to examine the electric field and photon absorption distributions throughout the device layers. The results of this project allow for the study of light behaviour in thin-film perovskite solar cells when microcavities are present. |
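A minimal transfer-matrix sketch in the spirit of the Python simulations described above: normal-incidence reflection and transmission through a stack of discrete layers via the standard characteristic-matrix method (Born & Wolf). The layer indices and thicknesses are illustrative placeholders, not the devices studied in the project.

```python
# Minimal sketch: transfer matrix method at normal incidence for a layer stack.
import numpy as np

def tmm_normal_incidence(n_layers, d_layers, wavelength, n_in=1.0, n_out=1.5):
    """n_layers: (complex) refractive indices; d_layers: thicknesses in the
    same units as wavelength. Assumes real ambient/substrate indices.
    Returns (R, T)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wavelength       # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    denom = n_in * M[0, 0] + n_in * n_out * M[0, 1] + M[1, 0] + n_out * M[1, 1]
    r = (n_in * M[0, 0] + n_in * n_out * M[0, 1] - M[1, 0] - n_out * M[1, 1]) / denom
    t = 2 * n_in / denom
    return abs(r) ** 2, (n_out / n_in) * abs(t) ** 2

# Hypothetical stack: a 1.2 um absorbing "microcavity" layer between contacts,
# thicknesses in nm, probed at 550 nm.
R, T = tmm_normal_incidence([1.9, 2.4 + 0.05j, 1.9], [100, 1200, 100],
                            wavelength=550)
print(f"R = {R:.3f}, T = {T:.3f}, A = {1 - R - T:.3f}")
```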
Afternoon Session |
Time | Title | Presenter | Room |
2:00 | X-ray and Neutron Reflectometry of a Thin Film Helimagnet | Thomas Benjamin Lacroix | HEBB 314 |
Abstract: | Spintronics is an emerging field of physics that aims to meet the ever-increasing demands for speed and capacity in information storage. This field necessitates research into novel materials that have relevant magnetic properties that would facilitate the construction of new devices. MnGe is a helimagnet, a material in which the electron spins form a spiral structure. However, the influence of finite-size effects on the magnetic structure of MnGe is unknown. MnGe remains understudied compared to other helimagnets, primarily due to the difficulty in producing stable and high-quality samples. The goal of this work is to create smooth, magnetically isolated MnGe films, with thicknesses comparable to the helical pitch, in order to explore this open question. This talk will present the growth of MnGe thin films via molecular beam epitaxy (MBE) on silicon wafers, as well as their structural and magnetic characterization. MnGe crystalline films of varying thickness were grown on a 0.5 nm seed layer and were capped with a protective Si capping layer. X-ray diffraction was used to verify the crystal structure of the films. By measuring X-ray diffraction in a glancing incidence geometry (known as X-ray reflectometry) we determined the film thicknesses and found that the MnGe roughnesses were small compared to the expected helical wavelength. Magnetic analysis of MnGe was performed using polarized neutron reflectometry (PNR). PNR takes advantage of the wave-particle duality of matter to use neutron waves in a similar way as X-rays are used in X-ray reflectometry. However, since neutrons have a magnetic moment, they can be used to determine the magnetic structure of the sample. Measurements as a function of magnetic field and temperature allow us to map the magnetic phase diagram and determine the helical pitch of our samples. | ||
2:20 | Using a laser floating zone furnace to grow single crystals of the Rost rocksalt high-entropy oxide | Gannon Munro | HEBB 314 |
Abstract: | Physicists need single crystal measurements to learn about the structure-property relationships of novel materials. In particular, even though Rost’s rocksalt has ushered in the unexplored field of high-entropy oxides (HEOs), we are still missing key single crystal measurements. These measurements would enable us to understand the effect of high entropy on its magnetic properties. However, growing useful single crystals of these materials is difficult since they are prone to shifts in composition. We use a laser floating zone furnace to grow the Rost rocksalt HEO with control and precision. We have analyzed the structure and composition of our initial attempts with X-ray diffraction (XRD) and energy-dispersive X-ray spectroscopy (EDS). These results show that we must continue adjusting and fine-tuning our growths to produce a useful single crystal. | ||
2:40 | High Entropy Cuprates: Synthesis and Superconductivity | Helena Cardoso Nunez | HEBB 314 |
Abstract: | While most superconducting materials have low critical temperatures (Tc), cuprates are known for having higher Tc, making their applications more versatile. Superconducting devices are still limited because of the temperature restrictions, making the synthesis of high Tc materials essential for broadening their practical usage. Although high entropy cuprates have been studied for some time, the single-crystal growth of these materials is challenging due to complexities in composition and growth conditions, such as maintaining equimolar ratios between the elements and obtaining a single-phase crystal. To address this, we utilized a floating zone (FZ) furnace, which allows real-time adjustments to the growth environment via an in-situ camera. We show here that the high-quality growth of these materials is achievable and that they have high Tc comparable to the low entropy cuprates. This allows a better understanding of the electronic and magnetic properties at the intersection between high entropy materials and superconductivity. | ||
3:00 | Spectral signatures of thermal fluctuations in inhomogeneous superconductors | Willem Farmilo | HEBB 314 |
Abstract: | Despite being well-studied, the mechanism of high-temperature superconductivity in the overdoped cuprate regime is still unclear. Uniquely, these systems demonstrate strong spatial inhomogeneity, which makes system-averaged behavior less significant, while local effects dominate. One way to characterize the transition from superconducting to normal metal at a given site is by the closure of the superconducting gap at the Fermi energy in the local density of states (LDOS) spectra. Recent experimental studies show the superconducting gap ‘filling in’, which does not agree with the superconducting gap ‘closing’ we expect from low-temperature BCS superconductors. One possible source of this behavior is finite temperature effects, such as random thermal fluctuations of the superconducting order parameter. This work uses a Ginzburg-Landau toy model to construct an inhomogeneous superconductor and arrive at a mean-field result for the superconducting order parameter and spectra. We then evolve this mean-field system in time, adding random complex thermal fluctuations to probe the effect this has on the superconducting LDOS, and compare with our original result. We find that thermal fluctuations do cause gap filling from the mean-field solution, which matches experimental behavior. However, the residual gap closing from our mean-field solution does not match experiment, leaving unanswered questions about the LDOS behavior in overdoped cuprate superconductors. | ||
3:20 | Experimental adjudication between causal accounts of Bell-inequality violations via statistical model selection | Liam Morrison | HEBB 314 |
Abstract: | Bell’s Theorem is central to our understanding of nature and a key result for emerging quantum technologies. This result places strict bounds on the strength of measured correlations if nature obeys a commonsense model (called a local hidden variable model). Quantum mechanical correlations famously violate this bound, and so it was found that local hidden variable theory cannot reproduce the predictions of quantum mechanics. To date, experimental evidence has confirmed that nature violates Bell’s inequalities; however, there has been no consensus on what description accurately replicates nature. In the absence of a clear path of investigation, the framework of causal statistics may suggest some direction. Causal models, which are mathematical constructs representing causal relationships within a system or population, enable the inference of causal relationships from statistical data. In our experiment, we consider a number of causal accounts of a Bell-like scenario, each of which aims to isolate one or more of the key assumptions of Bell’s Inequality. We implement these causal discovery algorithms on data collected from a photonic Bell scenario experiment. Our objective is to assess these models using a train-test methodology to adjudicate different causal explanations. This talk aims to cover the theory and methodologies of our experimental design and discuss the implications of our findings for the world of quantum information science. | ||
3:40 | From WWII to NASA: The Lost Art of Making Super Sensitive Permalloy Based Fluxgate Magnetometers | Pranav Advani | HEBB 314 |
Abstract: | Developed during World War II for “submarine detection” [1], fluxgate magnetometers are low noise, sensitive, and precise magnetic field detectors made up of two coils and a permalloy core. After World War II, this device was used by the Department of Mines and Technical Surveys for “aeromagnetic surveys” leading to the discovery of “the iron ore deposit at Marmora” in Ontario [1]. Fluxgate magnetometers were then used by NASA on Pioneer XI to “investigate Jupiter’s magnetic field” [2]. The fluxgate magnetometers used in World War II, used by the Department of Mines and Technical Surveys, and used by NASA were all made using the same original batch of permalloys synthesized in World War II. Presently, the permalloy used to make fluxgate magnetometers is running out and the art of making ring cores for fluxgate magnetometers has been lost. Therefore, NASA launched a research project with the University of Iowa to make a new generation of fluxgate magnetometers. NASA plans on using the fluxgate magnetometers to measure the solar activity when the “Sun’s poles flip” in April 2025 [3]. This presentation will delve into the design of a fluxgate magnetometer, highlighting challenges met along the way. It will cover important properties of the permalloy, speaking to the sensitivity of the device, the components and circuits needed, and testing methodologies applied to ensure functionality of the device. Works Cited: [1] Government of Canada, N. R. C. (2019, March 1). Fluxgate Airborne Magnetometer. Government of Canada, Natural Resources Canada, Canadian Hazards Information Service. https://geomag.nrcan.gc.ca/lab/vm/fluxgate-en.php [2] Acuna, M. H., & Ness, N. F. (n.d.). The pioneer 11 high-field fluxgate magnetometer – NASA technical reports server (NTRS). NASA. https://ntrs.nasa.gov/citations/19730022673 [3] Gilliard, G. (2024, July 16). Fluxgate magnetometers go back to the future. Medium. https://focus.science.ubc.ca/fluxgate-magnetometers-d48b98737c84 | ||
4:00 | Skyrmionic structures in models of chiral magnetic materials | Aryan Tiwari | HEBB 314 |
Abstract: | Magnetic skyrmions have garnered research interest in condensed matter systems, due to their presence in various chiral magnetic structures in recent literature, alongside their potential applications in flash memory and spintronics. In this talk, we briefly investigate various energy interactions associated with a 2D chiral magnetic sheet, in which energy functionals produce magnetic skyrmions as minimal-energy configurations for certain regimes in the parameter space. Specifically, by restricting to periodic minimizers, we obtain a sample discretized phase diagram for an energy functional involving direct exchange, Dzyaloshinskii-Moriya, and z easy-axis anisotropy interactions. We contrast this with a similar discretized phase diagram for an energy functional involving direct exchange, Dzyaloshinskii-Moriya, and z Zeeman interactions, characterized in previous work, and note that the anisotropy interaction acts to suppress skyrmionic lattice minimizers. Additionally, we find that the demagnetization field interaction acts to suppress skyrmionic lattice minimizers in the presence of the Zeeman interaction, and may potentially shift the phase boundary between spin helix and skyrmionic lattice regimes. |
Geophysics, AMO, TheoryMorning Session |
|||
Time | Title | Presenter | Room |
10:00 | Structural Geology Meets Deep Learning | Simon Ghyselincks | HEBB 316 |
Abstract: | Structural geology examines the form, arrangement, and distribution of rocks and strata within the Earth’s crust, playing a crucial role in subsurface modeling and mineral exploration. However, creating accurate subsurface models is a complex, ill-posed problem, hindered by the scarcity of ground truth field data and the uncertainties associated with interpreting geological processes that span vast temporal and spatial scales. Traditional methods for generating 3D geological models are labor-intensive, typically resulting in a single, static representation that fails to capture the full range of potential geological structures. This work presents a new approach to automating the generation of plausible 3D structural geology models using a geophysical computation engine coupled with random variables. The synthetic data generated by the engine is then used to train a flow-based generative AI model via stochastic interpolation. New 3D geological structures are sampled using a numerical ODE solver operating in a high-dimensional velocity field. This velocity field is learned as part of an optimization problem and parameterized with a deep neural network. The resulting models have broad applications, including the production of training data for geophysical modeling, such as geomagnetic and gravity inversions, and filling in missing data within sparsely sampled datasets. A key challenge in this methodology is generating a sufficiently diverse starting dataset that accurately reflects the heterogeneity and complexity of natural geological formations. To address this, we simulate the formation, deformation, erosion, and intrusion of stratigraphic layers over time using a Markov chain process. This comprehensive approach offers a scalable solution for generating complex geological models that can be integrated into a variety of geophysical applications. | ||
10:20 | Avalanches in a Granular System | Carson Harvey | HEBB 316 |
Abstract: | Granular materials and avalanches are widespread in nature and are observed in phenomena like landslides, rockfalls, and glacier movements. Studying these events in a controlled setting provides deep insights into their complex behaviours. We investigate the collapse of a model system that consists of a micrometric monodisperse column of oil droplets. This system allows precise control over experimental conditions and direct visualization of individual particles. We construct a 2D rectangular column of oil droplets where we can initiate the collapse, and vary parameters such as droplet size, cohesion strength, gravitational effects, and particle disorder. Through these experiments, we aim to analyze how these variables influence the collapse dynamics, enhancing our understanding of the underlying processes in granular avalanches. | ||
10:40 | Point retrieval of liquid water fraction using artificial neural networks | Perran Trentalange | HEBB 316 |
Abstract: | Sea ice extent or concentration is a crucial measurement in remote sensing and climate science in the Arctic. This measurement aids in understanding Arctic amplification, where the reduction in sea ice accelerates warming due to the decreasing reflective surface as the ice recedes. This value is also used in methods of Optimal Estimation (OE) to make more accurate measurements around the edges of ice sheets and land masses. Our neural network was trained on microwave data from the Advanced Technology Microwave Sounder (ATMS) and sea and land fractions from the European Centre for Medium-Range Weather Forecasts Reanalysis v5 (ERA5). ATMS data comprise 22 channels ranging from 50 GHz to 180 GHz with a nadir resolution of 14 km; these channels and the zenith angle of each measurement were used in the training of the neural network. The network has 23 inputs that feed into two hidden layers of 50 nodes each, which then output the two values of sea-ice and land fractions. It was trained for 150 epochs at a learning rate ranging from 1e-9 to 1e-6 with a cosine-based scheduler function. When tested against specific test points that were purposely excluded from the training and validation datasets, the model achieved a correlation coefficient of 0.978. A difficulty in using this technique is finding accurate representations of the sea mask for each individual satellite; the data that this model was trained on have potential inaccuracies because ERA5 and ATMS do not have the same resolution or track. Thus, with a definitively accurate dataset, this model could be trained to a higher accuracy and help improve the accuracy of OE retrievals at the edge of ice sheets and on shorelines. | ||
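The architecture and schedule described above map naturally onto a small PyTorch model. The sketch below is an assumed implementation (the actual training code may differ), with random tensors standing in for ATMS soundings and ERA5 fraction targets.

```python
# Minimal sketch: 23 inputs -> two hidden layers of 50 -> 2 outputs, with a
# cosine learning-rate schedule decaying from 1e-6 toward 1e-9 over 150 epochs.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(23, 50), nn.ReLU(),   # 22 ATMS channels + zenith angle
    nn.Linear(50, 50), nn.ReLU(),
    nn.Linear(50, 2),               # outputs: sea-ice fraction, land fraction
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-6)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=150, eta_min=1e-9)
loss_fn = nn.MSELoss()

# Hypothetical stand-in batch: 256 soundings of 23 features each.
x, y = torch.randn(256, 23), torch.rand(256, 2)
for epoch in range(150):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()
print(f"final training loss: {loss.item():.4f}")
```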
11:00 | Broken to Pieces: The Way Thin Films Crack | Ashvini Muralitharan | HEBB 316 |
Abstract: | Cracks are found everywhere – a crevice in a sidewalk, fissures in dried mud, or even cracks in the earth’s crust. Highly controlled systems can influence the formation of these cracking patterns as seen in artisanal works like pottery, or the aged craquelures of the Mona Lisa. Our work investigates the formation of crack patterns in thin nanometric polymer films when exposed to a solvent. Remarkably, we see cracks that propagate with a well-defined sinusoidal or repeating crescent pattern. We have found a correlation between the wavelength of the cracking pattern and the thickness of the polymer films. Having control over the crack patterns by manipulating film thickness and the solvent exposure mechanisms suggests that crack morphology is predictable in thin films. Our thin-film experiments are expected to inform the fundamental physics of crack formation and are relevant on length scales from the thin film coatings on devices to those that appear on geological scales. | ||
11:20 | Storage of Optical Pulses in Cold Atoms | Sophie Gans | HEBB 316 |
Abstract: | Quantum memory is the storage and retrieval of optical pulses, essential for all quantum information, communication and computing. The higher the memory efficiency, the faster operations can be performed. To obtain this memory, short 780 nm laser pulses were sent into an atomic cloud of Rb-87 and stored for 200 ns before retrieval. Through optimization of experimental parameters such as bias magnetic fields, beam polarizations and beam powers, an average memory efficiency of 8% was obtained. With this improvement in memory efficiency, the system has been adapted to operate on the single-photon level, allowing for the storage and retrieval of true quantum information. | ||
11:40 | Infrared free electron laser split-pump and two-colour experiments in ultracold helium nanodroplets | Myles B T. Osenton | HEBB 316 |
Abstract: | Ultracold helium nanodroplets provide an ideal matrix for gas-phase vibrational spectroscopy, reducing thermal broadening and spectral congestion while remaining largely unperturbed by solvent effects and interactions with dopant ions. In the experiment, ions are embedded in a helium droplet and irradiated with a burst-mode infrared free electron laser (FEL). Resonant absorption of photons from the FEL pulse train by the ion dissipates energy to the surrounding helium, causing evaporation and detection of the bare ion as a function of wavelength and generating low-noise spectra. FEL infrared action spectroscopy in ultracold helium nanodroplets also provides new opportunities for the probing of intra- and intermolecular energy transfer, vitally important in most chemical and physical processes. In order to explore the processes of energy transfer between defined quantum levels, pump-probe techniques can be implemented, where the absorption of a probe photon following the absorption of a pump photon can provide information about the quantum state of the system. Pump-probe experiments are largely performed in the condensed phase and are limited by sample heterogeneities and photon heating effects. Conversely, these experiments on dilute gas-phase samples embedded in helium nanodroplets must detect correlated absorption of two photons while avoiding heating the sample from excessive photon absorption. Here we explore the uses of ultracold helium for IR action spectroscopy for novel molecular detection and probing, including for carbocations and proton localisation within molecular cavities provided by 12-crown-4 ethers. We will also show the experimental design and process for controlling split-pump and simultaneous two-colour FEL experiments with adjustable spatial and temporal pulse overlap using second-harmonic generation at near to far IR. |
Afternoon Session |
Time | Title | Presenter | Room |
2:00 | Analysis of Precision Object Tracking for Determining Particle Properties in Optical Trapping | Ian Baker | HEBB 316 |
Abstract: | Optical trapping is a method of microscopic particle manipulation that has found application in a wide range of fields from biomedicine to nanoengineering. However, to determine properties of the trapped particle while in the trap, current methods generally rely on expensive quadrant photodiodes (QPDs) and fast oscilloscopes. Additionally, the experimental setups for these methods are uncommon and inflexible, making it difficult to conveniently perform other forms of experimentation simultaneously. To circumvent these issues, we sought to use a simple camera to obtain information about the particle through the statistical distribution of the particle’s location under Brownian motion. This required methods that could automatically track the particle in the recorded images in ways that were precise and consistent without being computationally expensive. This prompted the use of a Kernelized Correlation Filter (KCF) object tracker, based on discriminative classification, and of new tracking methods based on object characterization, ProjPeak and Mean of Minimums (MOM). All trackers were checked for consistency under repeated trials and compared against expected values for particle motion. It was found that object characterization was significantly more reliable than discriminative classification for precision object tracking, and that tracking could provide useful information about trapped particles without prohibitive computational time. As such, object-characterizing trackers may be useful in optical trapping implementations with modifications for the specific case. | ||
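For the KCF baseline, OpenCV’s contrib tracking module provides an off-the-shelf tracker; a minimal usage sketch follows. The video filename and initial bounding box are hypothetical, and the constructor’s location varies between OpenCV builds (top-level in some, under cv2.legacy in others), which the getattr guard accounts for. This is not the authors’ ProjPeak or MOM code.

```python
# Minimal sketch: track a trapped particle across video frames with KCF.
import cv2

# KCF lives in opencv-contrib; its constructor moved in recent versions.
create_kcf = getattr(cv2, "TrackerKCF_create", None) or cv2.legacy.TrackerKCF_create
tracker = create_kcf()

cap = cv2.VideoCapture("trapped_bead.avi")   # hypothetical recording
ok, frame = cap.read()
tracker.init(frame, (120, 95, 24, 24))       # hypothetical (x, y, w, h) box

positions = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = bbox
        positions.append((x + w / 2, y + h / 2))  # particle centre estimate

cap.release()
print(f"tracked {len(positions)} frames")
```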
2:20 | Development of a Modulation-Free Laser Source for Atomic Physics Experiments | Belen Llaguno | HEBB 316 |
Abstract: | The most common technique for laser frequency stabilization involves frequency modulation of the light source and a lock-in amplifier. We use the modulation-free “Doppler-free Dichroic Atomic Vapor Laser Lock” (DF-DAVLL) technique to stabilize the frequency of homebuilt diode lasers operating at 780 nm and 795 nm. These stabilized lasers can support ongoing experiments, such as atom interferometry, measurements of diffusion using optical lattices, and determination of excited state lifetimes. The DF-DAVLL requires a saturated absorption spectrometer including a rubidium vapor cell in a 20 G magnetic field. The absorption of a weak linearly polarized probe laser is monitored in the presence of a strong counterpropagating pump laser. The probe laser can be considered to consist of two orthogonal circular polarizations. In the presence of a magnetic field, the absorption spectra of these components exhibit a Zeeman energy shift. By polarization-selecting, recording and subtracting these absorption spectra on separate photodetectors, we generate a dispersion-shaped error signal to stabilize the laser frequency. Our system relies on homebuilt electronic circuits, thereby avoiding expensive commercial lock-in amplifiers. Laser frequency stability is ideally determined by beating two independent locked lasers at slightly different frequencies and measuring the Allan deviation (AD) of the beat note. Instead, we frequency-stabilize the laser with the DF-DAVLL technique and calculate the Allan deviation from a record of the error signal voltage as a function of time. Results are affected by noise on various timescales, which determines the laser systems’ suitability for experiments. We report an AD floor of ~ 2×10⁻¹¹ and lock durations of ~ 2 hours. | ||
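The Allan deviation from an error-signal record can be sketched as follows: bin the samples (scaled to fractional frequency) into averaging intervals of length tau and apply the standard two-sample (Allan) variance. This is a generic non-overlapping estimator applied to synthetic white noise, not the group’s analysis code; the sampling rate and scaling are hypothetical.

```python
# Minimal sketch: non-overlapping Allan deviation of a time series.
import numpy as np

def allan_deviation(y, rate, taus):
    """y: fractional-frequency samples at `rate` Hz; taus: averaging times (s)."""
    out = []
    for tau in taus:
        m = int(tau * rate)                 # samples per averaging bin
        n_bins = len(y) // m
        means = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)   # two-sample variance
        out.append(np.sqrt(avar))
    return np.array(out)

rate = 1000.0                               # 1 kHz sampling, hypothetical
y = np.random.normal(0, 1e-10, 200_000)     # error signal scaled to frac. freq.
for tau, adev in zip(np.logspace(-2, 1, 10),
                     allan_deviation(y, rate, np.logspace(-2, 1, 10))):
    print(f"tau = {tau:7.3f} s  ADEV = {adev:.2e}")
```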
2:40 | Feedback System for Laser Intensity Stabilization | Joseph Cuzzupoli | HEBB 316 |
Abstract: | We have constructed a feedback system for stabilizing the intensity of diode lasers for experiments in precision metrology. The system uses an analog circuit to generate a correction signal that regulates an acousto-optic modulator (AOM) so as to stabilize the output power of the laser. A portion of the light diffracted by the AOM is incident on a photodetector. The signal from this photodetector is compared to a setpoint voltage using a subtractor; the output is then amplified and integrated with a time constant that can be as small as 100 μs. The resulting signal controls a radio frequency (RF) attenuator that modulates a voltage-controlled oscillator driving the AOM. In this scheme, if the laser power fluctuates, the RF signal to the AOM changes so as to stabilize the laser intensity. We have also constructed a sample and hold module that engages the feedback loop when a control (TTL) pulse is applied and disengages the feedback when the control pulse is off. With the feedback engaged, we find that the laser intensity can be stabilized on time scales ranging from 100 μs to 20 ms. The standard deviation of the intensity fluctuations is typically reduced by more than a factor of 3. We find a similar reduction in laser intensity fluctuations when the feedback loop is operated with the sample and hold module, suggesting that our results can be extended to pulsed laser experiments. | ||
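As a conceptual illustration (not the group's analog circuit), the loop's subtract-integrate-actuate structure can be mimicked with a discrete-time integrator; all gains, noise levels, and time steps below are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

dt = 1e-5          # loop update interval, s (assumed)
tau = 100e-6       # integrator time constant (abstract's smallest value)
setpoint = 1.0     # photodetector setpoint voltage, V (assumed)
gain = 0.8         # net actuator gain, RF attenuator -> AOM (assumed)

integrator = 0.0
drift = 0.0
open_loop, closed_loop = [], []
for _ in range(20_000):
    drift += rng.normal(0.0, 2e-3)               # slow laser power drift
    raw = 1.0 + drift + rng.normal(0.0, 5e-3)    # uncorrected intensity, V
    corrected = raw - gain * integrator          # AOM applies the correction
    error = corrected - setpoint                 # subtractor stage
    integrator += error * dt / tau               # analog integrator analogue
    open_loop.append(raw)
    closed_loop.append(corrected)

print(f"open-loop  std: {np.std(open_loop):.4f} V")
print(f"closed-loop std: {np.std(closed_loop):.4f} V")
```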
3:00 | Quantum Decoherence and Entanglement | Arsam Najafian | HEBB 316 |
Abstract: | The study of decoherence, a purely quantum mechanical phenomenon, aims to address how a classical picture can arise from a fundamentally quantum description of the world. This talk aims to expose the audience to basic concepts associated with decoherence and some of the formalism used to study it. Fundamentally, decoherence is a consequence of quantum entanglement of a system with its environment, or of entanglement between the subparts of a larger system. It is this entanglement with the environment that causes a subsystem’s probability distribution to behave more like that of a classical ensemble of quantum states than like what we would expect from a superposition of them. Another important point is that decoherence is basis dependent, and thus we will discuss the idea of pointer states, which are in essence the states most immune to decoherence under unitary time evolution of the quantum state. For the required formalism, there will be a discussion of density operators and reduced density operators, which are necessary to describe the measurement statistics of an entangled subsystem on its own. A few short examples will be included to illustrate these ideas. | ||
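A minimal numerical sketch of the reduced-density-operator idea (a standard textbook example, not necessarily one from the talk): tracing out one qubit of a Bell state leaves a maximally mixed state, so the remaining qubit's measurement statistics look like a classical 50/50 ensemble rather than a superposition.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Bell state |Phi+> = (|00> + |11>) / sqrt(2)
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())          # full density operator

# Partial trace over the second qubit (the "environment")
rho_a = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_a)   # -> 0.5 * identity: all off-diagonal coherences are gone
```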
3:20 | On the Hopf Fibration and its applications | Aiden Magor | HEBB 316 |
Abstract: | The Hopf map or Hopf fibration is named after Heinz Hopf, who studied it in 1931. This mapping sends S3 to S2. More specifically, the Hopf fibration maps each distinct great circle (S1) on S3 to a distinct ray in CP1. We can then construct an isomorphism from CP1 to S2, allowing us to understand S3 by visualizing its construction with circles and a sphere. The Hopf map is an example of a non-trivial fiber bundle. A fiber bundle is a structure that locally resembles a Cartesian product but may have a different global form. Essentially, the Hopf map is a continuous “many-to-one” map that takes each such circle to a single point of S2. In this talk, we will explore the definition of a fiber bundle, how the Hopf map is constructed, and one of its many applications. Specifically, we will learn about the Bloch sphere, a geometric representation of all the states of a two-state quantum system as points on S2. Please note that this talk is intended for those who have not yet been introduced to topology. | ||
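A short computational sketch of one common convention for the Hopf map (assumed here, not necessarily the talk's notation): a point of S3 is a unit vector (z1, z2) in C², and a global phase rotation traces out the fiber circle that lands on a single point of S2.

```python
import numpy as np

def hopf(z1: complex, z2: complex) -> np.ndarray:
    """Map a point of S3 (|z1|^2 + |z2|^2 = 1) to a point of S2."""
    x = 2.0 * (z1.conjugate() * z2).real
    y = 2.0 * (z1.conjugate() * z2).imag
    z = abs(z1) ** 2 - abs(z2) ** 2
    return np.array([x, y, z])

z1, z2 = (1 + 1j) / 2, (1 - 1j) / 2      # a point on S3
p = hopf(z1, z2)
print(p, np.linalg.norm(p))              # lies on the unit sphere

# The whole fiber circle maps to the same point ("many-to-one"):
t = 0.7
print(hopf(np.exp(1j * t) * z1, np.exp(1j * t) * z2))
```

Reading (z1, z2) as the amplitudes of a qubit state z1|0⟩ + z2|1⟩, this is exactly the Bloch-sphere picture mentioned in the abstract.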
3:40 | Zero-bias anomaly in double quantum dots: A study of the effect of lead coupling | Caden Drover | HEBB 316 |
Abstract: | Ensembles of two-site quantum systems described by the Fermi-Hubbard model with disorder in site potentials are known to exhibit a zero-bias anomaly (ZBA) in the ensemble-average density of states (DOS), with a width proportional to the intersite hopping amplitude. A physical interpretation for these systems comes in the form of the double quantum dot, and a physical explanation for the ZBA is provided by the study of parallel-coupled charge transport through the dots. Prior work predicted a ZBA but neglected the effects of lead coupling. The purpose of this project was to answer the question: what changes, if any, will occur in double quantum-dot charge transport and ensemble-average DOS measurements when the presence of electronic leads is considered? Analytic calculations of transition rates, eigenstate probabilities, and electron current in the weak-coupling limit using Fermi’s golden rule are carried out to find the answer. Exploring the evolution of charge transport with bias voltage then allows for the determination of the ensemble-average DOS. By treating electronic leads as weak perturbations, a ZBA in the ensemble of parallel double quantum dots is confirmed. However, the appearance of this anomaly is found to differ when varying relative tunnel-coupling strengths and temperature, with the most prominent differences directly attributed to the occurrence of transitions out of excited states. | ||
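For reference, the weak-coupling transition rates in the abstract follow from Fermi's golden rule; its standard form (generic notation, not specific to this work) is sketched below.

```latex
% Fermi's golden rule: rate of a transition |i> -> |f> driven by a weak
% perturbation H' (here, the dot-lead tunnel coupling), to lowest order:
\Gamma_{i \to f} = \frac{2\pi}{\hbar}\,
    \bigl|\langle f | H' | i \rangle\bigr|^{2}\,\rho(E_f)
% where rho(E_f) is the density of lead states at the final energy E_f.
```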
4:00 | Tensors in Physics | Arnav Khandelwal | HEBB 316 |
Abstract: | In this talk, I will give an introduction to the tensor formalism from a physical perspective (“a tensor is something that transforms like a tensor,” i.e., is invariant under a change of coordinates). I will discuss covariance and contravariance in the context of vectors and covectors, and derive how they transform using intuition (e.g., when you express a vector in a new basis whose basis vectors are halved, the components double in size, so vector components transform contravariantly). I will talk about how we can think of covectors as row vectors, and how we can visualize them as level sets. I will then discuss the metric tensor, its transformation rules, and how we derive them. I will end by discussing covariant formulations of physical laws, using the covariant formulation of Maxwell’s equations as an example, in terms of the four-force Lorentz law and the Faraday tensor, and expressing the equations (and all equations of physics) in such a way that they are invariant under Lorentz transformations. |
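The transformation rules the abstract alludes to, in standard index notation (a conventional summary, not the speaker's slides):

```latex
% Contravariant vector components, covariant covector components, and the
% metric tensor under a change of coordinates x -> x':
v'^{\,i} = \frac{\partial x'^{\,i}}{\partial x^{j}} \, v^{j},
\qquad
\omega'_{i} = \frac{\partial x^{j}}{\partial x'^{\,i}} \, \omega_{j},
\qquad
g'_{ij} = \frac{\partial x^{k}}{\partial x'^{\,i}}
          \frac{\partial x^{l}}{\partial x'^{\,j}} \, g_{kl}
% Halved-basis example from the abstract: if e'_j = e_j / 2, then
% v'^j = 2 v^j, so components transform oppositely to the basis vectors.
```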