Merge branch 'audio_fonctionnel' into 'main'

Audio fonctionnel

See merge request jevalideca/yapp/breathaudio-backend!1
francois 2023-05-08 02:26:38 +00:00
commit e67b5fb4b0
16 changed files with 567 additions and 90 deletions

3
.gitignore vendored Normal file

@@ -0,0 +1,3 @@
/sources/
/tmp_sound/
/.idea/

23
Dockerfile Normal file

@@ -0,0 +1,23 @@
FROM python:3.10-slim
# Install system dependencies
RUN apt-get update && apt-get install -y ffmpeg
# Expose the application port
EXPOSE 8000
# Set the application working directory
WORKDIR /app
# Install the app's dependencies
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY *.py ./
RUN python install.py
# Run the application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

220
LICENSE Normal file

File diff suppressed because one or more lines are too long

README.md

@@ -1,92 +1,2 @@
# breathaudio-backend
## Getting started
To make it easy for you to get started with GitLab, here's a list of recommended next steps.
Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
## Add your files
- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:
```
cd existing_repo
git remote add origin https://git.jevalide.ca/gitlab/jevalideca/yapp/breathaudio-backend.git
git branch -M main
git push -uf origin main
```
## Integrate with your tools
- [ ] [Set up project integrations](https://git.jevalide.ca/gitlab/jevalideca/yapp/breathaudio-backend/-/settings/integrations)
## Collaborate with your team
- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Automatically merge when pipeline succeeds](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)
## Test and Deploy
Use the built-in continuous integration in GitLab.
- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
***
# Editing this README
When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thank you to [makeareadme.com](https://www.makeareadme.com/) for this template.
## Suggestions for a good README
Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
## Name
Choose a self-explaining name for your project.
## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
## Support
Tell people where they can go to for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.
## Contributing
State if you are open to contributions and what your requirements are for accepting them.
For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.
## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.
## License
For open source projects, say how it is licensed.
## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.

39
animate_fractal.py Normal file

@@ -0,0 +1,39 @@
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation

from mandelbrot import mandelbrot


def animategif():
    x_start, y_start = -2, -1.5  # an interesting region starts here
    width, height = 3, 3  # for 3 units up and right
    density_per_unit = 100  # how many pixels per unit

    # real and imaginary axis
    re = np.linspace(x_start, x_start + width, width * density_per_unit)
    im = np.linspace(y_start, y_start + height, height * density_per_unit)

    fig = plt.figure(figsize=(5, 5))  # instantiate a figure to draw
    ax = plt.axes()  # create an axes object

    def animate(i):
        ax.clear()  # clear axes object
        ax.set_xticks([], [])  # clear x-axis ticks
        ax.set_yticks([], [])  # clear y-axis ticks

        X = np.empty((len(re), len(im)))  # re-initialize the array-like image
        threshold = round(1.15 ** (i + 1))  # calculate the current threshold

        # iterations for the current threshold
        for i in range(len(re)):
            for j in range(len(im)):
                X[i, j] = mandelbrot(re[i], im[j], threshold)

        # associate colors to the iterations with an interpolation
        img = ax.imshow(X.T, interpolation="bicubic", cmap='magma')
        return [img]

    anim = animation.FuncAnimation(fig, animate, frames=45, interval=120, blit=True)
    return anim
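For a quick standalone check of animategif() outside the API, the returned animation can be saved directly. This is a sketch of my own, not part of the commit; the 'pillow' writer and output filename are assumptions (main.py itself saves with writer='imagemagick', which needs the ImageMagick binary):

```python
# Hypothetical local preview of the fractal animation; the output name is arbitrary.
from animate_fractal import animategif

anim = animategif()
anim.save("fractal_preview.gif", writer="pillow", fps=8)  # 'pillow' avoids ImageMagick
```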

Binary file not shown.

2
blend_av.py Normal file

@@ -0,0 +1,2 @@
def blend_av(audio, video):
    pass
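blend_av is still a stub. One possible direction, purely an assumption on my part and not the author's design, would be muxing the generated WAV and GIF with the ffmpeg binary the Dockerfile already installs, roughly like this:

```python
# Hypothetical sketch only: combine an audio file and a looping GIF into an MP4
# by shelling out to ffmpeg. Function name, arguments and codec choices are
# assumptions, not part of this commit.
import subprocess


def blend_av_sketch(audio_path: str, video_path: str, out_path: str = "blend.mp4") -> str:
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-stream_loop", "-1", "-i", video_path,  # loop the GIF indefinitely
            "-i", audio_path,
            "-shortest",                             # stop when the audio ends
            "-c:v", "libx264", "-pix_fmt", "yuv420p",
            "-c:a", "aac",
            out_path,
        ],
        check=True,
    )
    return out_path
```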

19
build-local.sh Normal file

@@ -0,0 +1,19 @@
#!/usr/bin/zsh
#
# Copyright (C) 2023 François Pelletier - Je valide ça, service-conseil
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
docker build -t local/breathaudio-backend .

25
docker-run.sh Normal file

@@ -0,0 +1,25 @@
#!/usr/bin/zsh
#
# Copyright (C) 2023 François Pelletier - Je valide ça, service-conseil
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This script runs breathaudio-backend in a local Docker container for testing
docker stop breathaudio-backend
docker rm breathaudio-backend
docker run -p 8052:8052 --name breathaudio-backend --network host \
    --volume "${PWD}/tmp_sound":"/app/tmp_sound" \
    --volume "${PWD}/bell":"/app/bell" \
    local/breathaudio-backend

11
generate_files.http Normal file

@@ -0,0 +1,11 @@
GET http://localhost:8000/generate_audio
content-type: application/json; charset=utf-8

{
  "main_frequency": 200,
  "lag_frequency": 202,
  "volume": 0.05,
  "breath_pattern_in": 4,
  "breath_pattern_out": 8,
  "bell_sound": "339810__inspectorj__hand-bells-a-single-mod.wav"
}
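As a sketch of the same request from Python, something like the call below should work with the httpx package already pinned in requirements.txt; the host and port match the Dockerfile's EXPOSE 8000, and the output filename is an assumption. Note that this bell_sound value is a filename rather than one of the names main.py checks for ("Cloche", "Bol tibétain"), so the endpoint would take the no-bell branch:

```python
# Hypothetical client call mirroring generate_files.http; host, port and output
# path are assumptions. httpx permits a JSON body on a GET request.
import httpx

payload = {
    "main_frequency": 200,
    "lag_frequency": 202,
    "volume": 0.05,
    "breath_pattern_in": 4,
    "breath_pattern_out": 8,
    "bell_sound": "339810__inspectorj__hand-bells-a-single-mod.wav",
}
response = httpx.request("GET", "http://localhost:8000/generate_audio", json=payload)
response.raise_for_status()
with open("breath.wav", "wb") as f:
    f.write(response.content)
```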

15
install.py Normal file

@@ -0,0 +1,15 @@
# Copyright (C) 2023 François Pelletier - Je valide ça, service-conseil
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

161
main.py Normal file

@@ -0,0 +1,161 @@
import os
import pathlib
import random

from fastapi import FastAPI
from fastapi.responses import FileResponse
from fastapi.testclient import TestClient
from pydantic import BaseModel
from AccelBrainBeat.brainbeat.binaural_beat import BinauralBeat
import pydub

from animate_fractal import animategif
from blend_av import blend_av

app = FastAPI()


class App(BaseModel):
    app: str


class AudioProperties(BaseModel):
    """
    Audio properties
    """
    main_frequency: int
    lag_frequency: int
    volume: float
    breath_pattern_in: int
    breath_pattern_out: int
    bell_sound: str


class VideoProperties(BaseModel):
    """
    Video properties
    """
    main_frequency: int
    breath_pattern_in: int
    breath_pattern_out: int
    color_scheme: str


class AvProperties(BaseModel):
    """
    Audio and video properties
    """
    main_frequency: int
    lag_frequency: int
    volume: float
    breath_pattern_in: int
    breath_pattern_out: int
    bell_sound: str
    color_palette: str


@app.get("/")
async def get_root():
    appresponse = App(app='breathaudio')
    return appresponse


@app.get("/generate_audio")
async def get_generate_audio(audio_properties: AudioProperties):
    """
    Generate audio
    :param audio_properties:
    :return:
    """
    tmp_dir = pathlib.Path("tmp_sound")
    tmp_dir.mkdir(parents=True, exist_ok=True)
    # Remove existing files in tmp_dir
    for filename in os.listdir(tmp_dir):
        os.remove(tmp_dir / str(filename))
    bell_dir = pathlib.Path("bell")
    filename_prefix = ''.join([random.choice('0123456789ABCDEF') for i in range(32)])
    filename = f"{filename_prefix}.wav"
    # Generate the binaural beat, covering two full in/out breath cycles
    b = BinauralBeat()
    b.save_beat(output_file_name=str(tmp_dir / "binaural_beat.wav"),
                frequencys=(audio_properties.main_frequency,
                            audio_properties.lag_frequency),
                play_time=(audio_properties.breath_pattern_in + audio_properties.breath_pattern_out) * 2,
                volume=audio_properties.volume)
    pydub_background = pydub.AudioSegment.from_wav(tmp_dir / "binaural_beat.wav")
    bell_file = ""
    if audio_properties.bell_sound == "Cloche":
        bell_file = "339810__inspectorj__hand-bells-a-single-mod.wav"
        skip_bell = False
    elif audio_properties.bell_sound == "Bol tibétain":
        bell_file = "417116__dersinnsspace__tibetan-bowl_right-hit-mod.wav"
        skip_bell = False
    else:
        skip_bell = True
    if skip_bell:
        pydub_mix = pydub_background
    else:
        # Overlay the bell at the start of each inhale and exhale phase
        pydub_bell = pydub.AudioSegment.from_wav(bell_dir / bell_file)
        pydub_mix = (
            pydub_background
            .overlay(pydub_bell, position=0)
            .overlay(pydub_bell, position=audio_properties.breath_pattern_in * 1000)
            .overlay(pydub_bell,
                     position=(audio_properties.breath_pattern_in + audio_properties.breath_pattern_out) * 1000)
            .overlay(pydub_bell,
                     position=(audio_properties.breath_pattern_in * 2 + audio_properties.breath_pattern_out) * 1000)
        )
    pydub_mix.export(tmp_dir / filename, format="wav")
    return FileResponse(tmp_dir / filename)


@app.get("/generate_video")
async def get_generate_video(video_properties: VideoProperties):
    """
    Generate video
    :param video_properties:
    :return:
    """
    tmp_dir = pathlib.Path("tmp_video")
    tmp_dir.mkdir(parents=True, exist_ok=True)
    # Remove existing files in tmp_dir
    for filename in os.listdir(tmp_dir):
        os.remove(tmp_dir / str(filename))
    anim = animategif()
    filename_prefix = ''.join([random.choice('0123456789ABCDEF') for i in range(32)])
    filename = f"{filename_prefix}.gif"
    anim.save(str(tmp_dir / filename), writer='imagemagick')
    return FileResponse(str(tmp_dir / filename))


@app.get("/generate_av")
async def get_generate_av(av_properties: AvProperties):
    """
    Generate audio and video using blend_av
    :param av_properties:
    :return:
    """
    # The sub-request models have only required fields, so they are built
    # directly from av_properties instead of being mutated after construction.
    a = AudioProperties(main_frequency=av_properties.main_frequency,
                        lag_frequency=av_properties.lag_frequency,
                        volume=av_properties.volume,
                        breath_pattern_in=av_properties.breath_pattern_in,
                        breath_pattern_out=av_properties.breath_pattern_out,
                        bell_sound=av_properties.bell_sound)
    audio = await get_generate_audio(a)
    # VideoProperties names the field color_scheme; AvProperties carries it as color_palette
    v = VideoProperties(main_frequency=av_properties.main_frequency,
                        breath_pattern_in=av_properties.breath_pattern_in,
                        breath_pattern_out=av_properties.breath_pattern_out,
                        color_scheme=av_properties.color_palette)
    video = await get_generate_video(v)
    av = blend_av(audio, video)
    return FileResponse(av)


client = TestClient(app)


def test_getroot():
    response = client.get("/")
    assert response.status_code == 200
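The TestClient at the bottom of main.py could also exercise the audio endpoint. The test below is a sketch of my own, not part of this commit; it deliberately passes a bell_sound value the endpoint does not recognize, so the no-bell branch runs and no file from bell/ is required:

```python
# Hypothetical additional test (not in this commit). "Aucune" is not one of the
# recognized bell names, so skip_bell is True and only the binaural beat is mixed.
def test_generate_audio_no_bell():
    payload = {
        "main_frequency": 200,
        "lag_frequency": 202,
        "volume": 0.05,
        "breath_pattern_in": 4,
        "breath_pattern_out": 8,
        "bell_sound": "Aucune",
    }
    response = client.request("GET", "/generate_audio", json=payload)
    assert response.status_code == 200  # a WAV FileResponse is expected
```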

21
mandelbrot.py Normal file

@@ -0,0 +1,21 @@
# Source: https://matplotlib.org/matplotblog/posts/animated-fractals/
def mandelbrot(x, y, threshold):
    """Calculates whether the number c = x + i*y belongs to the
    Mandelbrot set. In order to belong, the sequence z[i + 1] = z[i]**2 + c
    must not diverge after 'threshold' number of steps. The sequence diverges
    if the absolute value of z[i+1] is greater than 4.

    :param float x: the x component of the initial complex number
    :param float y: the y component of the initial complex number
    :param int threshold: the number of iterations to consider it converged
    """
    # initial conditions
    c = complex(x, y)
    z = complex(0, 0)

    for i in range(threshold):
        z = z ** 2 + c
        if abs(z) > 4.:  # it diverged
            return i

    return threshold - 1  # it didn't diverge
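To make the escape-time logic concrete: a point inside the set never exceeds the bound, so the function runs all threshold iterations and returns threshold - 1, while a point outside escapes quickly and returns the iteration at which |z| first exceeded 4. Two hand-checked values:

```python
# Worked examples of the escape-time function above (values checked by hand).
from mandelbrot import mandelbrot

assert mandelbrot(0.0, 0.0, 50) == 49  # c = 0 never diverges -> threshold - 1
assert mandelbrot(1.0, 1.0, 50) == 2   # |z| = |-7+7j| > 4 on the third iteration
```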

9
requirements.txt Normal file

@@ -0,0 +1,9 @@
fastapi~=0.95.0
pydantic~=1.10.7
uvicorn~=0.21.1
starlette~=0.26.1
AccelBrainBeat~=1.0.5
httpx~=0.23.3
pydub~=0.25.1
numpy~=1.24.2
matplotlib~=3.7.1

19
run-app.sh Normal file

@@ -0,0 +1,19 @@
#!/usr/bin/zsh
#
# Copyright (C) 2023 François Pelletier - Je valide ça, service-conseil
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
uvicorn main:app --host 0.0.0.0 --port 8000