READ ME

========================
General information
========================

Author: Christopher Harrison / James Trayford
Contact: christopher.harrison@newcastle.ac.uk / james.trayford@port.ac.uk
DOI: To Be Confirmed
License: CC-BY
Last updated: 13/05/2023

Related article: "Inspecting spectra with sound: proof-of-concept & extension
to datacubes" by J. Trayford, C. Harrison, R.C. Hinz, M. Kavanagh Blatt,
S. Dougherty and A. Girdhar. To appear in the Royal Astronomical Society
Techniques and Instruments (first review comments addressed).

========================
Introductory information
========================

Files included in the data deposit (with a short description of the data they
contain):

This project used the STRAUSS code (https://github.com/james-trayford/strauss)
to turn galaxy spectra into sound. The galaxy spectra came from the Sloan
Digital Sky Survey (SDSS; http://sdss.org/). The sounds were played to
volunteer participants, who were asked to rank the spectra based on what they
heard. The study investigates whether these rankings corresponded to the
physical variation in the galaxy spectra. The testing was split into three
separate tests (Test A, Test B and Test C), each using spectra that varied a
different physical property. Before each test, the participants were played
six example sounds as their "training".

This data release contains:

1. This README file.

2. A transcript of the survey taken by the participants, in a .docx file. The
   survey was presented through Google Forms during testing.

3. A summary document of the galaxy spectra used for each test. The document
   contains three tables (one for each test). The ten questions in each test
   are listed with the galaxy spectrum used for each, including their unique
   ID numbers from the Sloan Digital Sky Survey. The main property of interest
   for each test is also shown in the table.

4. The audio files used during the training of the participants. These are
   shared as three zip files, one for each test presented to the participants
   (A, B and C). Each zip file contains six .mp4 files (with blank visuals)
   and six .mp4 files (the audio with the corresponding visualisation). The
   visuals were not shown to participants but are provided here for
   convenience/interest. For each test, three examples were provided for each
   of the "low" and "high" extremes of the ranking.

5. The audio files used during the testing of the participants. These are
   shared as three zip files, one for each test (A, B and C). Each zip file
   contains ten .wav files (audio only) and ten .mp4 files (the audio with the
   corresponding visualisation). The visuals were not shown to participants
   but are provided here for convenience/interest. The ten files correspond to
   the ten questions used in each test.

6. An animated (audio-visual) version of Figure 3 from the main publication,
   in .mp4 video format.

Explain the relationship between multiple data sets, if required: N/A

==========================
Methodological information
==========================

A brief method description of what the data are, how and why they were
collected or created, and how they were processed:

The methods for each individual file are described above; an illustrative code
sketch of the sonification idea is given below.
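The following is a minimal, illustrative sketch only, not the authors' STRAUSS
pipeline: it maps a (fake, hypothetical) galaxy spectrum onto audible
frequencies and amplitudes and writes a short .wav file, using only numpy and
the Python standard library. All file names, wavelength ranges and mappings
are placeholders chosen for the example.

    # Illustrative spectrum-to-sound sketch (not the authors' STRAUSS pipeline).
    import wave
    import numpy as np

    rate = 44100                         # audio samples per second
    duration = 6.0                       # length of the clip in seconds
    t = np.linspace(0.0, duration, int(rate * duration), endpoint=False)

    # Hypothetical spectrum: a smooth continuum plus one emission line.
    # In the study the spectra came from SDSS; these arrays are made up.
    wavelength = np.linspace(3800.0, 9200.0, 500)
    flux = 1.0 + 0.5 * np.exp(-0.5 * ((wavelength - 6563.0) / 20.0) ** 2)

    # Map wavelength linearly onto an audible frequency range (200-2000 Hz)
    # and use the flux as amplitude, sweeping through the spectrum in time.
    freqs = np.interp(wavelength, (wavelength.min(), wavelength.max()),
                      (200.0, 2000.0))
    idx = np.minimum((t / duration * len(wavelength)).astype(int),
                     len(wavelength) - 1)
    phase = 2.0 * np.pi * np.cumsum(freqs[idx]) / rate
    signal = flux[idx] * np.sin(phase)
    signal = (signal / np.abs(signal).max() * 32767).astype(np.int16)

    with wave.open("example_sonification.wav", "wb") as out:
        out.setnchannels(1)              # mono
        out.setsampwidth(2)              # 16-bit samples
        out.setframerate(rate)
        out.writeframes(signal.tobytes())

The released audio files were instead generated with the STRAUSS package
listed below, which provides its own mapping and rendering tools.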
Instruments, hardware and software used:
+ STRAUSS Python package: https://github.com/james-trayford/strauss
+ Other Python packages: matplotlib, astropy, numpy
+ ffmpeg (command-line package used to make the movies); an illustrative
  command is sketched at the end of this README

Date(s) of data collection:
Audio-visual files generated in Feb/March 2022
Document files finalised May 2023

Geographic coverage of data: N/A

Data validation (how was the data checked, proofed and cleaned): N/A

Overview of secondary data, if used: N/A

=========================
Data-specific information
=========================

Definitions of names, labels, acronyms or specialist terminology used for
variables, records and their values: N/A

Explanation of weighting and grossing variables: N/A

Outline any missing data: None
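For reference, the sketch below shows one standard way of combining an audio
track with a visualisation into an .mp4 movie using ffmpeg (called from
Python). It is illustrative only: the file names are placeholders, it pairs
the audio with a single still image rather than an animated visualisation, and
the exact ffmpeg options used to produce the released movies are not recorded
here.

    # Illustrative only: mux an audio file with a still image into an .mp4.
    import subprocess

    subprocess.run(
        [
            "ffmpeg",
            "-loop", "1",                        # repeat the image as the video track
            "-i", "spectrum_visualisation.png",  # hypothetical visualisation frame
            "-i", "example_sonification.wav",    # hypothetical audio track
            "-c:v", "libx264",                   # H.264 video codec
            "-c:a", "aac",                       # AAC audio codec
            "-pix_fmt", "yuv420p",               # widely compatible pixel format
            "-shortest",                         # end the movie when the audio ends
            "example_movie.mp4",
        ],
        check=True,
    )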