
Media (2)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
-
Carte de Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (89)
-
Organize by category
17 May 2013 — In MediaSPIP, a section has two names: category and rubrique.
The various documents stored in MediaSPIP can be filed under different categories. A category can be created by clicking "publish a category" in the "publish" menu at the top right (after logging in). A category can itself be placed inside another category, so a tree of categories can be built.
When a document is next published, the newly created category will be offered (...) -
Retrieving information from the master site when installing an instance
26 November 2010 — Purpose
On the main site, a pooled instance is defined by several things: its data in the spip_mutus table; its logo; its main author (id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to finalize the creation of the instance;
It can therefore be quite useful to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...) -
Customizing categories
21 June 2013 — Category creation form
For those who know SPIP well, a category can be likened to a rubrique.
For a document of type "category", the fields offered by default are: Text
This form can be modified under:
Administration > Configuration of form masks.
For a document of type "media", the fields not displayed by default are: Short description
It is also in this configuration section that you can indicate the (...)
On other sites (10249)
-
FFmpeg - transform mulaw 8000 Hz audio buffer data into a valid bytes format
24 December 2023, by Bob Lozano — I'm trying to read a bytes variable using ffmpeg, but the audio stream I listen to sends me mu-law encoded buffer data like this:


https://github.com/boblp/mulaw_buffer_data/blob/main/buffer_data


I'm having trouble running the ffmpeg_read function from the transformers library, found here:


import subprocess

import numpy as np


def ffmpeg_read(bpayload: bytes, sampling_rate: int) -> np.array:
    """
    Helper function to read an audio file through ffmpeg.
    """
    ar = f"{sampling_rate}"
    ac = "1"
    format_for_conversion = "f32le"
    ffmpeg_command = [
        "ffmpeg",
        "-i",
        "pipe:0",
        "-ac",
        ac,
        "-ar",
        ar,
        "-f",
        format_for_conversion,
        "-hide_banner",
        "-loglevel",
        "quiet",
        "pipe:1",
    ]

    try:
        with subprocess.Popen(ffmpeg_command, stdin=subprocess.PIPE, stdout=subprocess.PIPE) as ffmpeg_process:
            output_stream = ffmpeg_process.communicate(bpayload)
    except FileNotFoundError as error:
        raise ValueError("ffmpeg was not found but is required to load audio files from filename") from error
    out_bytes = output_stream[0]
    audio = np.frombuffer(out_bytes, np.float32)
    if audio.shape[0] == 0:
        raise ValueError(
            "Soundfile is either not in the correct format or is malformed. Ensure that the soundfile has "
            "a valid audio file extension (e.g. wav, flac or mp3) and is not corrupted. If reading from a remote "
            "URL, ensure that the URL is the full address to **download** the audio file."
        )
    return audio



But every time I get:


raise ValueError(
    "Soundfile is either not in the correct format or is malformed. Ensure that the soundfile has "
    "a valid audio file extension (e.g. wav, flac or mp3) and is not corrupted. If reading from a remote "
    "URL, ensure that the URL is the full address to **download** the audio file."
)



If I grab any wav file, I can do something like this:


import wave

with open('./emma.wav', 'rb') as fd:
    contents = fd.read()
    print(contents)



And running it through the function does work!


So my question would be: how can I transform my mu-law encoded buffer data into a valid bytes format that works with ffmpeg_read()?

EDIT: I've found a way using pywav (https://pypi.org/project/pywav/):


import pywav

# 1 stands for mono channel, 8000 sample rate, 8 bit, 7 stands for MULAW encoding
wave_write = pywav.WavWrite("filename.wav", 1, 8000, 8, 7)
wave_write.write(mu_encoded_data)
wave_write.close()



This is the result : https://github.com/boblp/mulaw_buffer_data/blob/main/filename.wav


The background noise is acceptable.


However, I want to use FFmpeg instead, to avoid creating a temporary file.
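As it happens, ffmpeg can decode a headerless mu-law buffer straight from a pipe if it is told the input format up front, so no temporary WAV file is needed. A minimal sketch under that assumption (the helper names are mine; the flags mirror the ffmpeg_read pipeline above, with `-f mulaw` added so ffmpeg doesn't have to probe a format that isn't there):

```python
import subprocess


def build_mulaw_command(sampling_rate: int = 8000) -> list:
    # -f mulaw tells ffmpeg how to parse the headerless buffer; without it,
    # format probing fails and ffmpeg_read ends up with zero samples.
    return [
        "ffmpeg",
        "-f", "mulaw",              # raw mu-law demuxer
        "-ar", str(sampling_rate),  # 8000 Hz for telephone-style streams
        "-ac", "1",                 # mono
        "-i", "pipe:0",
        "-f", "f32le",              # raw float32 PCM out, as ffmpeg_read expects
        "-hide_banner", "-loglevel", "quiet",
        "pipe:1",
    ]


def decode_mulaw(mu_bytes: bytes, sampling_rate: int = 8000) -> bytes:
    """Return float32 little-endian PCM; np.frombuffer(out, np.float32) gives the array."""
    with subprocess.Popen(build_mulaw_command(sampling_rate),
                          stdin=subprocess.PIPE, stdout=subprocess.PIPE) as proc:
        out, _ = proc.communicate(mu_bytes)
    return out
```

The key difference from ffmpeg_read is only the three input options before `-i pipe:0`; everything downstream stays the same.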


-
How to merge video and camera recording together in the browser (especially Chrome)?
5 July 2021, by lzl124631x — Goal


I want to record/generate a video in browser (Chrome especially) with a custom video (e.g. .mp4, .webm) and camera recording side-by-side.


--------------------------------------------------
|                       |                        |
|   Some Custom Video   |       My Camera        |
|                       |                        |
--------------------------------------------------



What is working


I can use MediaRecorder to record my camera, play the recording side-by-side with my video, and download the recorded video as a webm.

Challenge


I'm facing difficulty merging the video and the camera recording side-by-side into a single video file.


My investigation


MultiStreamMixer


I first looked into MultiStreamMixer and built a demo with it (see codepen).




The issue with it is that it stretches the video content to fit both streams into the same size. I can specify different widths/heights for the two streams, but it doesn't work as expected: my camera gets cropped.




Custom Mixer


I took a look at the source code of MultiStreamMixer and found that the issue came from its simplistic layout logic. So I used its source code as a reference and built my own custom mixer. See codepen.


The way it works:


- First render the streams one by one to an offscreen canvas.
- Capture the stream from the canvas as the output video stream.
- Generate the audio stream separately using AudioContext, createMediaStreamSource, createMediaStreamDestination, etc.
- Merge the audio and video streams and output them as a single stream.
- Use MediaRecorder to record the mixed stream.

It adds black margins to the video/camera and doesn't stretch the videos.




However, I found that the recording is very blurry if you wave your hand in front of the camera while recording.




Initially I thought it was because I hadn't configured the canvas correctly. But later I found that even my MultiStreamMixer demo and the WebRTC demo (you can't see the text on the teapot clearly in the recording) generate blurry video with canvas.


I'm asking in the webrtc group to see if I can get around this issue. Meanwhile, I looked into ffmpeg.js.


ffmpeg.js


I think this would "work", but the file is too large. It's impractical to make the customer wait for this 23 MB JS file to download.


Other ways that I haven't tried


The above are my investigations thus far.


Another idea is to play the video and the recorded video side-by-side and use the screen recording API to record the merged version (example). But this would require the customer to wait the same amount of time as the initial recording for the screen/tab to be recorded.


Uploading the video to a server and doing the work server-side would be my last resort.
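For what it's worth, if the server-side route is ever taken, plain ffmpeg can do the side-by-side merge with its hstack filter; both inputs must be scaled to a common height first, since hstack rejects mismatched heights. A sketch, with placeholder file names and an arbitrary 720 px height of my choosing:

```python
import subprocess


def build_hstack_command(video_path: str, camera_path: str, out_path: str) -> list:
    # Scale both inputs to 720 px tall (width auto, kept even with -2),
    # then place them side by side with hstack. Audio is mapped from the
    # camera recording if present ("1:a?" makes the stream optional).
    filt = ("[0:v]scale=-2:720[left];"
            "[1:v]scale=-2:720[right];"
            "[left][right]hstack=inputs=2[v]")
    return [
        "ffmpeg", "-y",
        "-i", video_path,
        "-i", camera_path,
        "-filter_complex", filt,
        "-map", "[v]",
        "-map", "1:a?",
        out_path,
    ]


# e.g. subprocess.run(build_hstack_command("custom.mp4", "camera.webm", "merged.webm"))
```

Unlike the canvas approach, this composites the decoded frames directly, so it avoids the blurriness introduced by canvas capture, at the cost of the upload round-trip.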


-
Revisiting Nosefart and Discovering GME
30 May 2011, by Multimedia Mike — Game Hacking — I found the following screenshot buried deep in an old directory structure of mine:
I tried to recall how this screenshot came to exist. Had I actually created a functional KDE frontend to Nosefart yet neglected to release it? I think it's more likely that I used some designer tool (possibly KDevelop) to prototype a frontend. This would have been sometime in 2000.
However, this screenshot prompted me to revisit Nosefart.
Nosefart Background
Nosefart is a program that can play Nintendo Sound Format (NSF) files. NSF files contain components that were surgically separated from Nintendo Entertainment System (NES) ROM dumps. These components contain the music playback engines for various games. An NSF player is a stripped-down emulation system that can simulate the NES's 6502 CPU along with the custom audio hardware (2 square waves, 1 triangle wave, 1 noise generator, and 1 limited digital channel).
Nosefart was written by Matt Conte and eventually imported into a SourceForge project, though it has not seen any development since then. The distribution contains standalone command-line players for Linux and DOS, a GTK frontend for the Linux command-line version, and plugins for Winamp, XMMS, and CL-Amp.
The SourceForge project page notes that Nosefart is also part of XBMC. Let the record show that Nosefart is also incorporated into xine (I did that in 2002, I think).
Upgrading the API
When I tried running the command-line version of Nosefart under Linux, I hit hard against the legacy audio API: OSS. Remember that? In fairly short order, I was able to upgrade the CL program to use PulseAudio. The program is not especially sophisticated: it's a single-threaded affair which checks for a keypress, processes an audio frame, and sends the frame out to the OSS file interface. All that was needed was to rewrite open_hardware() and close_hardware() for PA and then replace the write statement in play(). The only quirk that stood out is that including <pulse/pulseaudio.h> is insufficient for programming PA's simple API; <pulse/simple.h> must be included separately.
For extra credit, I also adapted the program to ALSA. The program uses the most simplistic audio output API possible: just keep filling a buffer and sending it out to the DAC.
Discovering GME
I'm not sure what to do with the program now since, during my research to bring Nosefart up to date, I became aware of a software library named Game Music Emu, or GME. It's a pure C++ library that can essentially play any classic video game music format you can possibly name. Wow. A lot can happen in 10 years when you're not paying attention.
It's such a well-written library that I didn't need any tutorial or documentation to come up to speed. Just a quick read of the main gme.h header enabled me in short order to whip up a quick C program that could play NSF and SPC files. Path of least resistance: the client program asks the library to open a hardcoded file, synthesize 10 seconds of audio, and dump it into a file; ask the FLAC command-line program to transcode the raw data to a .flac file; use ffplay to verify the results.
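That verification pipeline can be scripted. A sketch, assuming the `flac` and `ffplay` binaries are on the path and that the raw dump is 16-bit signed little-endian stereo PCM at 44100 Hz (those layout values are my assumptions, not from the post; flac has to be told them explicitly for headerless input):

```python
import subprocess


def raw_to_flac_command(raw_path: str, flac_path: str,
                        sample_rate: int = 44100) -> list:
    # flac cannot infer the layout of headerless PCM, so every raw
    # parameter is spelled out on the command line.
    return [
        "flac", "--force-raw-format",
        "--endian=little", "--sign=signed",
        "--channels=2", "--bps=16",
        f"--sample-rate={sample_rate}",
        raw_path, "-o", flac_path,
    ]


# Then verify by ear: subprocess.run(["ffplay", "nsf_dump.flac"])
```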
I might develop some other uses for this library.