
Other articles (35)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will be attached automatically; objet, the type of object to which (...)
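For illustration only, the row shape implied by this field list might be modelled as below; a minimal TypeScript sketch whose field types are assumptions inferred from the description (the real table is defined in MySQL by the plugin, and the excerpt above is truncated):

    // Hypothetical shape of one row of spip_spipmotion_attentes, inferred
    // from the field descriptions above; the types are assumptions, not the
    // plugin's actual schema.
    interface SpipmotionAttente {
      id_spipmotion_attente: number; // unique numeric id of the task to process
      id_document: number;           // numeric id of the original document to encode
      id_objet: number;              // unique id of the object the encoded document attaches to
      objet: string;                 // type of that object (e.g. 'article')
      // ...further fields are elided in the excerpt
    }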
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, provided your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact your MediaSPIP administrator to find out.
-
Contribute to documentation
13 April 2011
Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation from users as well as developers, including:
- critique of existing features and functions
- articles contributed by developers, administrators, content producers and editors
- screenshots to illustrate the above
- translations of existing documentation into other languages
To contribute, register for the project users’ mailing (...)
On other sites (5952)
-
lavfi: add showwavespic filter
24 December 2014, by Clément Bœsch
This is a variant of showwaves. It is implemented as a different filter
so that the user is not allowed to use meaningless options which belong
to showwaves (such as rate).

Major edits done by Stefano Sabatini, from a patch by ubitux.
See thread:
From: Clément Bœsch <u@pkh.me>
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 24 Dec 2014 15:03:26 +0100
Subject: [FFmpeg-devel] [PATCH] avfilter/showwaves: add single_pic option
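As a point of reference, the filter added by this patch is typically invoked to render an entire audio file into one waveform image. A minimal TypeScript sketch, assuming an ffmpeg build with libavfilter is on the PATH; the file names are placeholders:

    // Render input.mp3 into a single 640x240 waveform PNG via showwavespic.
    // Paths are placeholders; assumes an ffmpeg build that includes the filter.
    import { execFile } from 'child_process';

    execFile('ffmpeg', [
      '-i', 'input.mp3',
      '-filter_complex', 'showwavespic=s=640x240',
      '-frames:v', '1', // the filter produces one picture for the whole input
      'output.png',
    ], (err) => {
      if (err) throw err;
    });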
-
aacenc_tns: rework coefficient quantization and filter application
1 September 2015, by Rostislav Pehlivanov
This commit reworks the TNS implementation to a hybrid between what
the specifications say, what the decoder does and what’s the best
thing to do.

The filter application function was copied from the decoder and
modified such that it applies the inverse AR filter to the
coefficients. The LPC coefficients themselves are fed into the
same quantization expression that the specifications say should
be used; however, no further processing is done. Instead, they’re
converted to the form that the decoder expects them to be in
and are sent off to the compute_lpc_coeffs function exactly the
way the decoder does. This function does all conversions and will
return the exact coefficients that the decoder will generate, which
are then applied to the coefficients.
Having the exact same coefficients on both the encoder and decoder
is a must, since otherwise all the SFBs over which the filter
is applied will be attenuated.

Despite this major rework, TNS might not work well on some audio
types at very low bitrates (e.g. sub 90kbps) as it can attenuate
some coefficients too much. Users are advised to experiment with
TNS at higher bitrates if they wish to use this tool or simply
wait for the implementation to be improved.

Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
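Following the commit's advice to experiment at higher bitrates, here is a hedged sketch of enabling TNS explicitly with the native AAC encoder; the file names and bitrate are placeholders, and the aac_tns option is assumed to be present in the build:

    // Encode with FFmpeg's native AAC encoder, explicitly enabling TNS.
    // Assumes a build whose native encoder exposes the aac_tns option.
    import { execFile } from 'child_process';

    execFile('ffmpeg', [
      '-i', 'input.wav',
      '-c:a', 'aac',
      '-aac_tns', '1', // enable Temporal Noise Shaping
      '-b:a', '192k',  // well above the ~90 kbps floor mentioned above
      'output.m4a',
    ], (err) => {
      if (err) throw err;
    });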
-
How do I send a mediaStream from the electron renderer process to a background ffmpeg process?
26 July 2020, by Samamoma_Vadakopa

Goal (to avoid the XY problem):


I'm building a small linux desktop application using webRTC, electron, and create-react-app. The application should receive a mediaStream via a webRTC peer connection, display the stream to the user, create a virtual webcam device, and send the stream to the virtual webcam so it can be selected as the input on most major videoconferencing platforms.


Problem:


The individual parts all work: receiving the stream (webRTC), creating the webcam device (v4l2loopback), creating a child process of ffmpeg from within electron, passing the video stream to the ffmpeg process, streaming the video to the virtual device using ffmpeg, and selecting the virtual device and seeing the video stream in a videoconference meeting.


But I'm currently stuck on tying the parts together.
The problem is that the mediaStream object is available inside Electron's renderer process (as state in a deeply nested React component, FWIW). As far as I can tell, I can only create a Node.js child process of ffmpeg from within Electron's main process, which implies that I need to get the mediaStream from the renderer to the main process. To communicate between processes, Electron uses an IPC system. Unfortunately, it seems that IPC doesn't support sending a complex object like a video stream.


What I've tried:


- starting an ffmpeg child process (using child_process.spawn) from within the renderer process throws a 'fs.fileexistssync' error. Browsing SO indicates that only the main process can start these background processes.
- creating a separate webRTC connection between renderer and main to re-stream the video. I'm using IPC to facilitate the connection, but offer/answer descriptions aren't reaching the other peer over IPC; my guess is this is due to the same limitations on IPC as before.

My next step is to create a separate node server on app startup which ingests the incoming RTC stream and rebroadcasts it to the app's renderer process, as well as to a background ffmpeg process.


Before I try that, though, does anyone have suggestions for approaches I should consider ? (this is my first SO question, so any advice on how to improve it is appreciated).
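One approach worth considering, sketched below under stated assumptions: instead of sending the MediaStream object itself (which IPC's structured clone cannot serialize), wrap it in a MediaRecorder in the renderer and ship ArrayBuffer chunks over IPC, which structured clone does support; the main process then pipes those chunks into ffmpeg's stdin. The channel name, device path, and codec settings here are illustrative assumptions, not tested code:

    // renderer.ts - serialize the MediaStream with MediaRecorder and send
    // encoded chunks to the main process over IPC. Assumes the preload (or
    // nodeIntegration) makes ipcRenderer available in the renderer.
    import { ipcRenderer } from 'electron';

    function pipeStreamToMain(stream: MediaStream): void {
      const recorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp8' });
      recorder.ondataavailable = async (e: BlobEvent) => {
        if (e.data.size > 0) {
          // ArrayBuffers survive structured-clone IPC; a MediaStream does not.
          ipcRenderer.send('video-chunk', await e.data.arrayBuffer());
        }
      };
      recorder.start(100); // emit a chunk roughly every 100 ms
    }

    // main.ts - spawn ffmpeg once and write each incoming chunk to its stdin.
    // '/dev/video10' stands in for the v4l2loopback device created earlier.
    import { ipcMain } from 'electron';
    import { spawn } from 'child_process';

    const ff = spawn('ffmpeg', [
      '-i', 'pipe:0',        // read the webm stream from stdin
      '-f', 'v4l2',          // write frames to the loopback device
      '-pix_fmt', 'yuv420p',
      '/dev/video10',
    ]);

    ipcMain.on('video-chunk', (_event, chunk: ArrayBuffer) => {
      ff.stdin.write(Buffer.from(chunk));
    });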

