
Other articles (15)
-
Installation in farm mode
4 February 2011 — Farm mode lets you host several MediaSPIP-type sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
First of all, you must have installed the same files as for the (...)
-
Managing rights to create and edit objects
8 February 2011 — By default, many features are restricted to administrators, but they can be configured independently to change the minimum status required to use them, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;
-
Submit bugs and patches
13 April 2011 — Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
On other sites (1970)
-
How to create a video file webm from chunks by media recorder api using ffmpeg
17 October 2020, by Caio Nakai — I'm trying to create a webm video file from blobs generated by the MediaRecorder API in a Node.js server using FFmpeg. I'm able to create the .webm file, but it's not playable. I ran this command
$ ffmpeg.exe -v error -i lel.webm -f null - >error.log 2>&1
to generate an error log; the error log file contains this:

[null @ 000002ce7501de40] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 1 >= 1
[h264 @ 000002ce74a727c0] Invalid NAL unit size (804 > 74).
[h264 @ 000002ce74a727c0] Error splitting the input into NAL units.
Error while decoding stream #0:0: Invalid data found when processing input

This is my web server code:


const app = require("express")();
const http = require("http").createServer(app);
const io = require("socket.io")(http);
const fs = require("fs");
const child_process = require("child_process");

app.get("/", (req, res) => {
  res.sendFile(__dirname + "/index.html");
});

io.on("connection", (socket) => {
  console.log("a user connected");

  const ffmpeg = child_process.spawn("ffmpeg", [
    "-i",
    "-",
    "-vcodec",
    "copy",
    "-f",
    "flv",
    "rtmpUrl.webm",
  ]);

  ffmpeg.on("close", (code, signal) => {
    console.log(
      "FFmpeg child process closed, code " + code + ", signal " + signal
    );
  });

  ffmpeg.stdin.on("error", (e) => {
    console.log("FFmpeg STDIN Error", e);
  });

  ffmpeg.stderr.on("data", (data) => {
    console.log("FFmpeg STDERR:", data.toString());
  });

  socket.on("message", (msg) => {
    console.log("Writing blob! ");
    ffmpeg.stdin.write(msg);
  });

  socket.on("stop", () => {
    console.log("Stop recording..");
    ffmpeg.kill("SIGINT");
  });
});

http.listen(3000, () => {
  console.log("listening on *:3000");
});
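One thing worth noting about the spawn above: with "-f flv", ffmpeg writes an FLV container regardless of the .webm file name, and WebM itself only accepts VP8/VP9/AV1 video, so h264 chunks cannot be stream-copied into a real .webm. Below is a minimal sketch of the same spawn with the container matched to the codec; the output name output.mkv and the copy/matroska choice are illustrative assumptions, not part of the question:

// Sketch (illustrative, not the original code): remux the MediaRecorder chunks
// into a container that can actually hold h264.
const ffmpegCopy = child_process.spawn("ffmpeg", [
  "-i", "-",          // read the MediaRecorder blobs from stdin
  "-c", "copy",       // keep the recorded streams as-is, just remux
  "-f", "matroska",   // Matroska (or mp4) can hold h264; WebM cannot
  "output.mkv",       // illustrative output path
]);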




And this is my client code, using HTML and JS:

<script src='/socket.io/socket.io.js'></script>

<script>
  const socket = io();
  let mediaRecorder = null;
  const startRecording = (someStream) => {
    const mediaStream = new MediaStream();
    const videoTrack = someStream.getVideoTracks()[0];
    const audioTrack = someStream.getAudioTracks()[0];
    console.log("Video trac ", videoTrack);
    console.log("audio trac ", audioTrack);
    mediaStream.addTrack(videoTrack);
    mediaStream.addTrack(audioTrack);

    const recorderOptions = {
      mimeType: "video/webm;codecs=h264",
      videoBitsPerSecond: 3 * 1024 * 1024,
    };

    mediaRecorder = new MediaRecorder(mediaStream, recorderOptions);
    mediaRecorder.start(1000); // 1000 - the number of milliseconds to record into each Blob
    mediaRecorder.ondataavailable = (event) => {
      console.debug("Got blob data:", event.data);
      if (event.data && event.data.size > 0) {
        socket.emit("message", event.data);
      }
    };
  };

  const getVideoStream = async () => {
    try {
      const stream = await navigator.mediaDevices.getUserMedia({
        video: true,
        audio: true,
      });
      startRecording(stream);
      myVideo.srcObject = stream;
    } catch (e) {
      console.error("navigator.getUserMedia error:", e);
    }
  };

  const stopRecording = () => {
    mediaRecorder.stop();
    socket.emit("stop");
  };
</script>

hello world

<video id="myvideo"></video>

<script>
  const myVideo = document.getElementById("myvideo");
  myVideo.muted = true;
</script>
Any help is appreciated!
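As a side note on the recorder options above, here is a short sketch of picking a mimeType the browser actually supports before constructing the MediaRecorder; the candidate list is illustrative, not taken from the question:

// Sketch: prefer a container/codec combination the browser can actually record.
const candidates = [
  "video/webm;codecs=h264",     // what the question uses (non-standard in WebM)
  "video/webm;codecs=vp9,opus", // standard WebM combinations
  "video/webm;codecs=vp8,opus",
  "video/webm",
];
// MediaRecorder.isTypeSupported() reports whether the browser can record that type.
const mimeType = candidates.find((t) => MediaRecorder.isTypeSupported(t));
if (!mimeType) {
  throw new Error("No supported MediaRecorder mimeType found");
}
const recorderOptions = { mimeType, videoBitsPerSecond: 3 * 1024 * 1024 };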


-
Restarting an ffmpeg child_process?
9 October 2020, by kinx — I'm starting multiple RTSP streams with ffmpeg and child_process. Some of these streams do not connect, failing on the first attempt, and need constant reconnection retries before the stream starts.


Here are the ffmpeg options:


setOptions() {
  const options = {
    "-rtsp_transport": "tcp",
    "-i": this.url,
    "-f": "mpegts",
    "-s": "1280x720",
    "-speed": "6",
    "-codec:v": "mpeg1video",
    "-codec:a": "mp2",
    "-stats": "",
    "-b:v": "1000k",
    "-ar": "44100",
    "-r": 30,
    "-bf": 0,
    "-ac": "1",
    "-b:a": "128k",
    "-muxdelay": "0.001",
  };
  let params = [];
  for (let key in options) {
    params.push(key);
    if (String(options[key]) !== "") {
      params.push(String(options[key]));
    }
  }
  params.push("-");
  return params;
}
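For readability, this is roughly the argument vector the loop above produces; the RTSP URL shown is a placeholder. Note that "-stats" is pushed without a value and the trailing "-" sends the muxed MPEG-TS to stdout:

// Roughly what setOptions() returns when this.url is "rtsp://example/stream" (placeholder):
// ["-rtsp_transport", "tcp", "-i", "rtsp://example/stream", "-f", "mpegts",
//  "-s", "1280x720", "-speed", "6", "-codec:v", "mpeg1video", "-codec:a", "mp2",
//  "-stats", "-b:v", "1000k", "-ar", "44100", "-r", "30", "-bf", "0",
//  "-ac", "1", "-b:a", "128k", "-muxdelay", "0.001", "-"]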



Here's the method I'm using to start the process and the "attempt" to restart it if the connection fails:


startStream() {
  const self = this;
  const wss = new WebSocket.Server({ port: this.port });
  const childSpawn = child_process.spawn(
    "ffmpeg",
    this.setOptions(this.url),
    {
      detached: true,
    }
  );
  childSpawn.stdout.on("data", (data) => {
    wss.clients.forEach((client) => {
      client.send(data);
      /// console.log(data);
      if (self.retries > 0) {
        console.log("-- STREAM HAS CONNECTED");
      }
    });
    self.child = childSpawn;
  });
  childSpawn.on("close", (code, signal) => {
    console.log("process closed");
    if (code === 1) {
      setTimeout(() => {
        console.log("retrying stream", self.url);
        self.startStream();
      }, 50000);
    }
  });
}



The idea is to wait X seconds and then try to restart the stream.


What is the correct and best way to do this?
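One common pattern is to wrap the spawn in a supervisor that respawns on close with a capped exponential backoff and resets the delay once data flows again. The sketch below is illustrative, not a drop-in replacement for startStream(); the helper name superviseStream and the delay constants are assumptions:

// Sketch: respawn ffmpeg with capped exponential backoff.
const child_process = require("child_process");

function superviseStream(url, args, onData) {
  let delayMs = 1000;            // initial retry delay (illustrative)
  const MAX_DELAY_MS = 60000;    // upper bound on the backoff (illustrative)

  const start = () => {
    const child = child_process.spawn("ffmpeg", args, { detached: true });

    child.stdout.on("data", (data) => {
      delayMs = 1000;            // stream is alive again: reset the backoff
      onData(data);
    });

    child.on("close", (code, signal) => {
      console.log(`ffmpeg for ${url} exited (code ${code}, signal ${signal}), retrying in ${delayMs} ms`);
      setTimeout(start, delayMs);
      delayMs = Math.min(delayMs * 2, MAX_DELAY_MS);
    });

    child.on("error", (err) => {
      // spawn failures (e.g. ffmpeg not on PATH) also end up here
      console.error("failed to start ffmpeg:", err);
    });
  };

  start();
}

Resetting the delay on successful data keeps a flaky camera from being penalized forever, while the cap avoids hammering a stream that is down.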


-
avformat/segment: Don't overwrite AVCodecParameters after init
6 September 2020, by Andreas Rheinhardt

The segment muxer copies the user-provided AVCodecParameters to the newly created child streams in its init function before initializing the child muxer; and since commit 8e6478b723affe4d44f94d34b98e0c47f6a0b411, it does this again before calling avformat_write_header() if that is called from seg_write_header(). The reason for this is complicated:

At that time writing the header was delayed, i.e. it was not triggered by avformat_write_header() (unless AVFMT_FLAG_AUTO_BSF was unset), but instead by writing the very first packet. The rationale behind this was to allow running bitstream filters on the packets in the interleavement queue in order to generate missing extradata from them before the muxer's write_header function is actually called.

The segment muxer went even further: it initialized the child muxer and ran the child muxer's check_bitstream functions on the packets in its own muxing queue, stealing any bitstream filters that got inserted. The reason for this is that the segment muxer has an option to write the header to a separate file, and for this the child muxer's header needs to be written without delay, but with correct extradata. Unsetting AVFMT_FLAG_AUTO_BSF for the child muxer accomplished the first goal and stealing the bitstream filters the second; and in order for the child muxer to actually use the updated extradata, the old AVCodecParameters (set before avformat_init_output()) were overwritten with the new ones.

Updating the extradata proceeded as follows: the bitstream filter itself simply updated the AVBSFContext's par_out when processing a packet, in violation of the new BSF API (where par_out may only be set in the init function); the muxing code then simply forwarded the updated extradata, overwriting the par_in of the next BSF in the BSF chain with the fresh par_out of the last one, and the AVStream's par with the par_out of the last BSF. This was an API violation, too, of course, but it made remuxing ADTS AAC into mp4/matroska work.

But this no longer serves a useful purpose since the aac_adtstoasc BSF was updated to propagate new extradata via packet side data in commit f63c3516577d605e51cf16358cbdfa0bc97565d8; the next commit then removed the code in mux.c that passed new extradata along the filter chain. This alone justifies removing the code for setting the AVCodecParameters a second time.

But there is another reason to do so: it is harmful. The ogg muxer parses the extradata of Theora and Vorbis in its init function and keeps pointers to parts of it. Said pointers become dangling when the extradata is overwritten by the segment muxer, leading to use-after-frees, as happened in ticket #8881, which this commit fixes.

Ticket #8517 is about another issue caused by this: immediately after having overwritten the old AVCodecParameters, the segment muxer checks whether the codec_tag is OK (the codec_tag is set generically when initializing the child muxer based upon muxer-specific lists). The check used is: if the child output format has such a list, and if the codec tag of the non-child stream does not match the codec id given the list of codec tags, and if there is a match for the codec id in the codec tag list, then set the codec tag to zero (and not to the existing match); otherwise set the codec tag of the child stream to the codec tag of the corresponding stream of the main AVFormatContext (which is, incidentally, redundant given that the child AVCodecParameters have just been overwritten with the AVCodecParameters of the corresponding stream of the main AVFormatContext).

Reviewed-by: Ridley Combs <rcombs@rcombs.me>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>