
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (47)
-
APPENDIX: The plugins used specifically for the farm
5 March 2010
The central/master site of the farm needs several additional plugins, beyond those used by the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
-
Automatic backup of SPIP channels
1 April 2010
As part of setting up an open platform, it is important for hosts to have reasonably regular backups to guard against any potential problem.
This task relies on two SPIP plugins: Saveauto, which makes a regular backup of the database in the form of a mysql dump (usable in phpmyadmin); mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.
On other sites (6265)
-
Can't initialize "h264_mediacodec" for hw accelerated decoding, FFMPEG, Android
27 March 2024, by Ramil Galin
I am trying to create hw-accelerated decoding on Android through JNI, following this example, but unfortunately avcodec_get_hw_config returns nullptr.

I have also tried avcodec_find_decoder_by_name("h264_mediacodec"); it also returns nullptr.

I built FFmpeg (version 4.4) using this script, with the flags:


--enable-jni \
--enable-mediacodec \
--enable-decoder=h264_mediacodec \
--enable-hwaccel=h264_mediacodec \



When configuring the build I saw in the logs
WARNING: Option --enable-hwaccel=h264_mediacodec did not match anything
which is strange: FFmpeg 4.4 should support hw-accelerated decoding using mediacodec.
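Before digging into the C++ side, it can help to confirm that the decoder was compiled into the build at all: run `ffmpeg -decoders` with the cross-built binary and search the output for `h264_mediacodec`. A small parser sketch for that output (the `sample` string and the `decoder_available` name are illustrative, not taken from this build):

```python
def decoder_available(decoders_output: str, name: str) -> bool:
    """Scan `ffmpeg -decoders` output for an exact decoder name."""
    for line in decoders_output.splitlines():
        # decoder lines look like: " V....D h264_mediacodec   H.264 Android MediaCodec decoder"
        fields = line.split()
        if len(fields) >= 2 and fields[1] == name:
            return True
    return False

sample = """Decoders:
 V....D h264                 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
 V....D h264_mediacodec      H.264 Android MediaCodec decoder
"""
print(decoder_available(sample, "h264_mediacodec"))  # True
```

If the name is missing from the real output, the problem is in the build configuration, not in the runtime code.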

Edit: providing a minimal reproducible example.


In the JNI method I initialize the decoder's input context and then the decoder itself:


void Decoder::initInputContext(
    const std::string& source,
    AVDictionary* options
) {
    // open input and allocate the format context
    if (
        avformat_open_input(
            &m_inputFormatContext,
            source.c_str(),
            NULL,
            options ? &options : nullptr
        ) < 0
    ) {
        throw FFmpegException(
            fmt::format("Decoder: Could not open source {}", source)
        );
    }

    // retrieve stream information
    if (avformat_find_stream_info(m_inputFormatContext, NULL) < 0) {
        throw FFmpegException(
            "Decoder: Could not find stream information"
        );
    }

    // get audio and video streams
    for (size_t i = 0; i < m_inputFormatContext->nb_streams; i++) {
        AVStream* inStream = m_inputFormatContext->streams[i];
        AVCodecParameters* inCodecpar = inStream->codecpar;
        if (
            inCodecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
            inCodecpar->codec_type != AVMEDIA_TYPE_VIDEO
        ) {
            continue;
        }

        if (inCodecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
            m_videoStreamIdx = i;
            m_videoStream = inStream;

            m_codecParams.videoCodecId = m_videoStream->codecpar->codec_id;
            m_codecParams.fps = static_cast<int>(av_q2d(m_videoStream->r_frame_rate) + 0.5);
            m_codecParams.clockrate = m_videoStream->time_base.den;

            spdlog::debug(
                "Decoder: fps: {}, clockrate: {}",
                m_codecParams.fps,
                m_codecParams.clockrate
            );
        }

        if (inCodecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            m_audioStreamIdx = i;
            m_audioStream = inStream;

            m_codecParams.audioCodecId = m_audioStream->codecpar->codec_id;
            m_codecParams.audioSamplerate = m_audioStream->codecpar->sample_rate;
            m_codecParams.audioChannels = m_audioStream->codecpar->channels;
            m_codecParams.audioProfile = m_audioStream->codecpar->profile;

            spdlog::debug(
                "Decoder: audio samplerate: {}, audio channels: {}, x: {}",
                m_codecParams.audioSamplerate,
                m_codecParams.audioChannels,
                m_audioStream->codecpar->channels
            );
        }
    }
}

void Decoder::initDecoder() {
    AVCodecParameters* videoStreamCodecParams = m_videoStream->codecpar;

    m_swsContext = sws_getContext(
        videoStreamCodecParams->width, videoStreamCodecParams->height, m_pixFormat,
        videoStreamCodecParams->width, videoStreamCodecParams->height, m_targetPixFormat,
        SWS_BICUBIC, nullptr, nullptr, nullptr);

    // find best video stream info and decoder
    int ret = av_find_best_stream(m_inputFormatContext, AVMEDIA_TYPE_VIDEO, -1, -1, &m_decoder, 0);
    if (ret < 0) {
        throw FFmpegException(
            "Decoder: Cannot find a video stream in the input file"
        );
    }

    if (!m_decoder) {
        throw FFmpegException(
            "Decoder: Can't find decoder"
        );
    }

    // search for a supported HW decoder configuration
    for (int i = 0;; i++) {
        const AVCodecHWConfig* config = avcodec_get_hw_config(m_decoder, i);
        if (!config) {
            spdlog::error(
                "Decoder {} does not support device type {}. "
                "Will use SW decoder...",
                m_decoder->name,
                av_hwdevice_get_type_name(m_deviceType)
            );
            break;
        }

        if (
            config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX &&
            config->device_type == m_deviceType
        ) {
            // set up pixel format for the HW decoder
            g_hwPixFmt = config->pix_fmt;
            m_hwDecoderSupported = true;
            break;
        }
    }
}


And I have
AVHWDeviceType m_deviceType{AV_HWDEVICE_TYPE_MEDIACODEC};
but avcodec_get_hw_config returns nullptr.

Any help is appreciated.


-
How to transcribe the recording for speech recognition
29 May 2021, by DLim
After downloading and uploading files related to Mozilla DeepSpeech, I started using Google Colab. I am using mozilla/deepspeech for speech recognition. The code shown below records my audio. After recording the audio, I want to use a function to transcribe the recording into text. Everything compiles, but the text does not come out correctly. Any thoughts on my code?


"""
To write this piece of code I took inspiration/code from a lot of places.
It was late night, so I'm not sure how much I created or just copied o.O
Here are some of the possible references:
https://blog.addpipe.com/recording-audio-in-the-browser-using-pure-html5-and-minimal-javascript/
https://stackoverflow.com/a/18650249
https://hacks.mozilla.org/2014/06/easy-audio-capture-with-the-mediarecorder-api/
https://air.ghost.io/recording-to-an-audio-file-using-html5-and-js/
https://stackoverflow.com/a/49019356
"""
from google.colab.output import eval_js
from base64 import b64decode
from scipy.io.wavfile import read as wav_read
import io
import numpy as np  # used by recordingTranscribe below
import ffmpeg  # the ffmpeg-python package
from IPython.display import HTML, display  # used by get_audio below

AUDIO_HTML = """
<script>
var my_div = document.createElement("DIV");
var my_p = document.createElement("P");
var my_btn = document.createElement("BUTTON");
var t = document.createTextNode("Press to start recording");

my_btn.appendChild(t);
//my_p.appendChild(my_btn);
my_div.appendChild(my_btn);
document.body.appendChild(my_div);

var base64data = 0;
var reader;
var recorder, gumStream;
var recordButton = my_btn;

var handleSuccess = function(stream) {
  gumStream = stream;
  var options = {
    //bitsPerSecond: 8000, //chrome seems to ignore, always 48k
    mimeType : 'audio/webm;codecs=opus'
    //mimeType : 'audio/webm;codecs=pcm'
  };
  //recorder = new MediaRecorder(stream, options);
  recorder = new MediaRecorder(stream);
  recorder.ondataavailable = function(e) {
    var url = URL.createObjectURL(e.data);
    var preview = document.createElement('audio');
    preview.controls = true;
    preview.src = url;
    document.body.appendChild(preview);

    reader = new FileReader();
    reader.readAsDataURL(e.data);
    reader.onloadend = function() {
      base64data = reader.result;
      //console.log("Inside FileReader:" + base64data);
    }
  };
  recorder.start();
};

recordButton.innerText = "Recording... press to stop";

navigator.mediaDevices.getUserMedia({audio: true}).then(handleSuccess);

function toggleRecording() {
  if (recorder && recorder.state == "recording") {
    recorder.stop();
    gumStream.getAudioTracks()[0].stop();
    recordButton.innerText = "Saving the recording... pls wait!"
  }
}

// https://stackoverflow.com/a/951057
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

var data = new Promise(resolve => {
  //recordButton.addEventListener("click", toggleRecording);
  recordButton.onclick = () => {
    toggleRecording()

    sleep(2000).then(() => {
      // wait 2000ms for the data to be available...
      // ideally this should use something like await...
      //console.log("Inside data:" + base64data)
      resolve(base64data.toString())
    });
  }
});
</script>
"""

def get_audio():
    display(HTML(AUDIO_HTML))
    data = eval_js("data")
    binary = b64decode(data.split(',')[1])

    process = (ffmpeg
        .input('pipe:0')
        .output('pipe:1', format='wav')
        .run_async(pipe_stdin=True, pipe_stdout=True, pipe_stderr=True, quiet=True, overwrite_output=True)
    )
    output, err = process.communicate(input=binary)

    riff_chunk_size = len(output) - 8
    # Break up the chunk size into four bytes, held in b.
    q = riff_chunk_size
    b = []
    for i in range(4):
        q, r = divmod(q, 256)
        b.append(r)

    # Replace bytes 4:8 in proc.stdout with the actual size of the RIFF chunk.
    riff = output[:4] + bytes(b) + output[8:]

    sr, audio = wav_read(io.BytesIO(riff))

    return audio, sr

audio, sr = get_audio()


def recordingTranscribe(audio):
    data16 = np.frombuffer(audio)
    return model.stt(data16)

recordingTranscribe(audio)
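The divmod loop in get_audio exists because ffmpeg, when writing WAV to a pipe, cannot seek back to fill in the RIFF chunk size, so the snippet patches it by hand. For what it's worth, the same patch can be expressed with the standard struct module (`patch_riff_size` and `fake_wav` are illustrative names, not part of the original snippet):

```python
import struct

def patch_riff_size(wav_bytes: bytes) -> bytes:
    """Rewrite bytes 4:8 of a RIFF header with the real chunk size (little-endian u32)."""
    riff_chunk_size = len(wav_bytes) - 8
    return wav_bytes[:4] + struct.pack("<I", riff_chunk_size) + wav_bytes[8:]

# a dummy RIFF blob with a bogus size field: "RIFF" + size + "WAVE" + 36 filler bytes
fake_wav = b"RIFF" + b"\xff\xff\xff\xff" + b"WAVE" + b"\x00" * 36
patched = patch_riff_size(fake_wav)
print(struct.unpack("<I", patched[4:8])[0])  # 40, i.e. len(fake_wav) - 8
```

struct.pack("<I", n) produces exactly the four little-endian bytes the divmod loop builds one at a time.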



-
How to convert rtmp hevc video stream to srt av1 endpoint with ffmpeg ?
20 June 2024, by Lulík
I want to use ffmpeg to listen for an rtmp stream and send it to an srt endpoint.


Flow: smartphone (camera) -> laptop (ffmpeg script) -> desktop (OBS Studio)


The ffmpeg script shows a warning message, and in OBS Studio I cannot see any video, only audio.


Thank you in advance.


Console output while running the script (the error at the end is because I stopped sending data from the phone):


ffmpeg version git-2024-06-20-8d6014d Copyright (c) 2000-2024 the FFmpeg developers
 built with gcc 12 (Debian 12.2.0-14)
 configuration: --enable-libsvtav1 --enable-libsrt
 libavutil 59. 24.100 / 59. 24.100
 libavcodec 61. 8.100 / 61. 8.100
 libavformat 61. 3.104 / 61. 3.104
 libavdevice 61. 2.100 / 61. 2.100
 libavfilter 10. 2.102 / 10. 2.102
 libswscale 8. 2.100 / 8. 2.100
 libswresample 5. 2.100 / 5. 2.100
Input #0, flv, from 'rtmp://192.168.0.194/s/streamKey':
 Duration: 00:00:00.00, start: 0.000000, bitrate: N/A
 Stream #0:0: Video: hevc (Main), yuv420p(tv, smpte170m/bt470bg/smpte170m), 1080x1920, 10240 kb/s, 30 fps, 120 tbr, 1k tbn
 Stream #0:1: Audio: aac (LC), 44100 Hz, stereo, fltp, 131 kb/s
Stream mapping:
 Stream #0:0 -> #0:0 (hevc (native) -> av1 (libsvtav1))
 Stream #0:1 -> #0:1 (aac (native) -> mp2 (native))
Press [q] to stop, [?] for help
Svt[info]: -------------------------------------------
Svt[info]: SVT [version]: SVT-AV1 Encoder Lib 595a874
Svt[info]: SVT [build] : GCC 12.2.0 64 bit
Svt[info]: LIB Build date: Jun 20 2024 14:25:08
Svt[info]: -------------------------------------------
Svt[info]: Number of logical cores available: 12
Svt[info]: Number of PPCS 76
Svt[info]: [asm level on system : up to avx2]
Svt[info]: [asm level selected : up to avx2]
Svt[info]: -------------------------------------------
Svt[info]: SVT [config]: main profile tier (auto) level (auto)
Svt[info]: SVT [config]: width / height / fps numerator / fps denominator : 1080 / 1920 / 120 / 1
Svt[info]: SVT [config]: bit-depth / color format : 8 / YUV420
Svt[info]: SVT [config]: preset / tune / pred struct : 10 / PSNR / random access
Svt[info]: SVT [config]: gop size / mini-gop size / key-frame type : 641 / 16 / key frame
Svt[info]: SVT [config]: BRC mode / rate factor : CRF / 35 
Svt[info]: SVT [config]: AQ mode / variance boost : 2 / 0
Svt[info]: -------------------------------------------
Svt[warn]: Failed to set thread priority: Invalid argument
Output #0, mpegts, to 'srt://192.168.0.167:9998?mode=caller':
 Metadata:
 encoder : Lavf61.3.104
 Stream #0:0: Video: av1, yuv420p(tv, smpte170m/bt470bg/smpte170m, progressive), 1080x1920, q=2-31, 120 fps, 90k tbn
 Metadata:
 encoder : Lavc61.8.100 libsvtav1
 Stream #0:1: Audio: mp2, 44100 Hz, stereo, s16, 384 kb/s
 Metadata:
 encoder : Lavc61.8.100 mp2
[mpegts @ 0x55ec921d9540] Stream 0, codec av1, is muxed as a private data stream and may not be recognized upon reading.
[in#0/flv @ 0x55ec9219cc40] Error during demuxing: Input/output error1990.7kbits/s speed=0.967x 
[out#0/mpegts @ 0x55ec922247c0] video:4431KiB audio:1138KiB subtitle:0KiB other streams:0KiB global headers:0KiB muxing overhead: 6.374870%
frame= 723 fps= 31 q=35.0 Lsize= 5923KiB time=00:00:24.12 bitrate=2011.3kbits/s speed=1.04x



I send the video stream from the mobile app over rtmp, encoded with hevc, to my laptop, where I run the script
ffmpeg -f flv -listen 1 -i rtmp://192.168.0.194/s/streamKey -c:v libsvtav1 -f mpegts srt://192.168.0.167:9998?mode=caller
On the desktop I have OBS with a media source input srt://192.168.0.167:9998?mode=listener

When I run the ffmpeg script without the video codec option (-c:v libsvtav1) it works fine, and in OBS I can see the video from my phone camera. With the option I cannot see video, only audio.
I clearly don't understand the warning message:
[mpegts @ 0x55ec921d9540] Stream 0, codec av1, is muxed as a private data stream and may not be recognized upon reading.
Do I need to specify the codec (av1) in the OBS media source, or is my ffmpeg script wrong?
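As a side note on reading the log above: the "muxing overhead" ffmpeg reports is simply the container bytes that belong to no stream, divided by the stream payload. Recomputing it from the KiB-rounded summary line lands close to the reported 6.374870% (the small drift comes from the rounding; `muxing_overhead_pct` is an illustrative helper, not an ffmpeg function):

```python
def muxing_overhead_pct(total_kib: float, video_kib: float, audio_kib: float) -> float:
    """Container overhead: non-stream bytes as a percentage of stream bytes."""
    payload = video_kib + audio_kib
    return 100.0 * (total_kib - payload) / payload

# values from the summary line: Lsize=5923KiB, video:4431KiB, audio:1138KiB
print(round(muxing_overhead_pct(5923, 4431, 1138), 2))  # 6.36
```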