Scholar Commons - SC Upstate Research Symposium: HM-2 A Sunrise You Can Hear: Blending Code, Color, and Sound
 

SCURS Disciplines

Computer Sciences

Document Type

Oral Presentation

Abstract

Color perception is an important part of the human experience, yet it is inaccessible to individuals who are blind or visually impaired. Tremendous strides have been made in developing technology that helps blind or visually impaired individuals perceive colors through sound. In this project, we explore how a blind or visually impaired individual can hear an artistic natural phenomenon such as a sunrise. By creating an algorithmic music video, we investigate how blind individuals can associate colors with sounds, producing a computer-generated audiovisual experience that simulates a realistic sunrise.

In this study, visible light frequencies are algorithmically mapped to audible sound frequencies. The project consists of three key components: color gradient generation, mapping colors to audio, and music video creation. First, the visual aspect of the sunrise is created in the Wolfram Language. The gradient colors of the sky transition from black to red, red to yellow, and yellow to blue. Second, an audio composition is generated that matches each color transition to harmonic progressions in real time. This is done by dividing a piano keyboard into three sections: red (left), green (middle), and blue (right), where each section corresponds to a specific set of pitches. As the sunrise progresses, the mapped pitches are played by beautiful-sounding instruments such as the shakuhachi and shamisen, producing a rich, immersive sound. Audio normalization and fading techniques further enhance the listening experience. Finally, the video component is created by generating multiple frames using graphical transformations in the Wolfram Language. Each frame consists of a gradient sky background that changes with the progression of the sunrise. The lower portion of the frame features a darkened horizon landscape, providing contrast to the transitioning colors of the sky. The frames are then combined into a video sequence, synchronized with the generated audio, resulting in a music-video representation of a sunrise.
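The mappings described above can be sketched as follows. The project itself is implemented in the Wolfram Language; this Python version is only a minimal illustration of the general technique, and its specific constants (the 40-octave downshift, the example pitch sets, and the piecewise gradient stops) are assumptions, not the authors' exact parameters.

```python
# Illustrative sketch (not the authors' code): light-to-sound frequency
# mapping, RGB-to-keyboard-section mapping, and a black->red->yellow->blue
# sky gradient. All constants are hypothetical.

def light_to_audio_hz(light_thz: float, octave_shift: int = 40) -> float:
    """Shift a visible-light frequency (in THz) down by whole octaves
    so it lands in the audible range (~20 Hz to 20 kHz)."""
    return light_thz * 1e12 / (2 ** octave_shift)

# Keyboard split into three sections, one per color channel, each with
# its own (hypothetical) set of MIDI pitch numbers.
SECTION_PITCHES = {
    "red":   [48, 50, 52, 53],  # left of the keyboard (lower pitches)
    "green": [60, 62, 64, 65],  # middle
    "blue":  [72, 74, 76, 77],  # right (higher pitches)
}

def dominant_section(r: float, g: float, b: float) -> str:
    """Pick the keyboard section for the strongest color channel."""
    channels = {"red": r, "green": g, "blue": b}
    return max(channels, key=channels.get)

def sky_color(t: float) -> tuple:
    """Piecewise-linear sky gradient for t in [0, 1]:
    black -> red -> yellow -> blue, as in the described sunrise."""
    stops = [(0.0, (0.0, 0.0, 0.0)),   # night: black
             (1/3, (1.0, 0.0, 0.0)),   # dawn: red
             (2/3, (1.0, 1.0, 0.0)),   # sunrise: yellow
             (1.0, (0.0, 0.0, 1.0))]   # day: blue
    for (t0, c0), (t1, c1) in zip(stops, stops[1:]):
        if t <= t1:
            f = (t - t0) / (t1 - t0)
            return tuple(a + f * (b - a) for a, b in zip(c0, c1))
    return stops[-1][1]
```

For example, red light near 430 THz shifted down 40 octaves falls around 390 Hz, comfortably inside the audible range, which is the intuition behind octave-based light-to-sound mappings.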

The results of this study show a correlation between the visual and auditory senses, suggesting that visual beauty can be experienced through sound. This work highlights the potential of algorithmic tools, including Artificial Intelligence (AI), to create engaging multimedia experiences. Moreover, this project opens pathways for broader applications in assistive technology, education, and artistic expression.

Keywords

Algorithmic Music Generation, Wolfram Language, Assistive Technology

Start Date

11-4-2025 2:55 PM

Location

CASB 104

End Date

11-4-2025 3:10 PM
