
Media (91)
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
-
USGS Real-time Earthquakes
8 September 2011
Updated: September 2011
Language: French
Type: Text
-
SWFUpload Process
6 September 2011
Updated: September 2011
Language: French
Type: Text
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
-
Podcasting Legal Guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
-
Creative Commons informational flyer
16 May 2011
Updated: July 2013
Language: English
Type: Text
Other articles (53)
-
The videos
21 April 2011
As with "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 <video> tag.
One drawback of this tag is that some browsers (Internet Explorer, to name names) do not recognize it correctly, and each browser natively supports only certain video formats.
Its main advantage is native video playback in the browser, which makes it possible to do without Flash and (...)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Possibility of farm deployment
12 April 2011
MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This makes it possible, for example: to share setup costs among several projects or individuals; to quickly deploy a multitude of unique sites; and to avoid having to dump every creation into a digital catch-all, as happens on the big general-public platforms scattered across the (...)
On other sites (10600)
-
Minimal sample of muxing two streams with no reencoding (av_interleaved_write_frame fails)
19 July 2022, by Alvein
What I'm trying to do: given two files, one video-only and the other audio-only, with identical durations, I want to "join" them in a single container.


I previously wrote a routine that simply copies all the streams from one container into another. No reencoding, etc. This works perfectly:


while(true) {
    pkIn=av_packet_alloc();
    if(NULL==pkIn) {
        fprintf(stderr,"av_packet_alloc() failed");
        break;
    }
    iError=av_read_frame(fcIn,pkIn);
    if(0>iError) {
        if(AVERROR_EOF==iError) {
            av_packet_free(&pkIn); /* also release the packet on EOF */
            break;
        }
        fprintf(stderr,"av_read_frame() failed");
        break;
    }
    /* rescale timestamps from the input stream's time base to the output's */
    stIn=fcIn->streams[pkIn->stream_index];
    stOut=fcOut->streams[pkIn->stream_index];
    log_packet(fcIn,pkIn,"in");
    av_packet_rescale_ts(pkIn,stIn->time_base,stOut->time_base);
    pkIn->pos=-1;
    log_packet(fcOut,pkIn,"out");
    iError=av_interleaved_write_frame(fcOut,pkIn);
    if(0>iError) {
        fprintf(stderr,"av_interleaved_write_frame() failed");
        break;
    }
    av_packet_free(&pkIn);
}



Reasoning by analogy, I then tried to do the same thing, but taking each stream from a distinct container, like this:


while(true) {
    if(!bVideoInEOF) {
        pkVideoIn=av_packet_alloc();
        if(NULL==pkVideoIn) {
            fprintf(stderr,"av_packet_alloc(video in) failed");
            break;
        }
        iError=av_read_frame(fcVideoIn,pkVideoIn);
        if(0>iError) {
            if(AVERROR_EOF==iError)
                bVideoInEOF=true;
            else {
                fprintf(stderr,"av_read_frame(video in) failed");
                break;
            }
        }
        if(!bVideoInEOF) {
            log_packet(fcVideoIn,pkVideoIn,"video in");
            av_packet_rescale_ts(pkVideoIn,stVideoIn->time_base,stVideoOut->time_base);
            pkVideoIn->pos=-1;
            pkVideoIn->stream_index=stVideoOut->index; // Edit (2022-07-19)
            log_packet(fcVideoIn,pkVideoIn,"video out");
            iError=av_interleaved_write_frame(fcOut,pkVideoIn);
            if(0>iError) {
                fprintf(stderr,"av_interleaved_write_frame(video out) failed");
                break;
            }
        }
        av_packet_free(&pkVideoIn);
    }
    if(!bAudioInEOF) {
        pkAudioIn=av_packet_alloc();
        if(NULL==pkAudioIn) {
            fprintf(stderr,"av_packet_alloc(audio in) failed");
            break;
        }
        iError=av_read_frame(fcAudioIn,pkAudioIn);
        if(0>iError) {
            if(AVERROR_EOF==iError)
                bAudioInEOF=true;
            else {
                fprintf(stderr,"av_read_frame(audio in) failed");
                break;
            }
        }
        if(!bAudioInEOF) {
            log_packet(fcAudioIn,pkAudioIn,"audio in");
            av_packet_rescale_ts(pkAudioIn,stAudioIn->time_base,stAudioOut->time_base);
            pkAudioIn->pos=-1;
            pkAudioIn->stream_index=stAudioOut->index; // Edit (2022-07-19)
            log_packet(fcAudioIn,pkAudioIn,"audio out");
            iError=av_interleaved_write_frame(fcOut,pkAudioIn);
            if(0>iError) {
                fprintf(stderr,"av_interleaved_write_frame(audio out) failed");
                break;
            }
        }
        av_packet_free(&pkAudioIn);
    }
    if(bVideoInEOF&&bAudioInEOF)
        break;
}



I know the code above looks redundant, but I wanted to keep the processing of the two streams separate so that my intent is clear.


Anyway, that code ends quickly with "av_interleaved_write_frame(audio out) failed".


The error detail is "Invalid argument", and the debugger shows this:

Application provided invalid, non monotonically increasing dts to muxer in stream 0.

If I disable either of the main blocks, "if(!bVideoInEOF)" or "if(!bAudioInEOF)", the file is written successfully, obviously lacking the disabled stream.


I'm new to this library, so I'm probably doing something really stupid or missing something obvious.


Suggestions?


Edit (2022-07-19):


By checking the logs, I noticed I was writing every packet to stream #0, hence the horrible jumps in PTS/DTS: packets from two different timelines were landing in a single stream.


I edited the code above, adding the corresponding "...->stream_index=" assignment before each call to av_interleaved_write_frame().


...


Though it works, I still think my code is far from perfect. Comments are welcome.
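
For completeness, here is a minimal sketch of the output-side setup the loops above assume (fcOut, stVideoOut, stAudioOut). The variable names mirror the question; the output file name and the omitted error handling are assumptions of the sketch, not part of the original post:

#include <libavformat/avformat.h>

/* Sketch only: create fcOut with a video and an audio stream whose codec
 * parameters are copied from the two inputs (no reencoding). Assumes
 * fcVideoIn/fcAudioIn are already open and stVideoIn/stAudioIn located. */
AVFormatContext *fcOut=NULL;
avformat_alloc_output_context2(&fcOut,NULL,NULL,"out.mp4");

AVStream *stVideoOut=avformat_new_stream(fcOut,NULL);
avcodec_parameters_copy(stVideoOut->codecpar,stVideoIn->codecpar);
stVideoOut->codecpar->codec_tag=0; /* let the muxer pick its own tag */

AVStream *stAudioOut=avformat_new_stream(fcOut,NULL);
avcodec_parameters_copy(stAudioOut->codecpar,stAudioIn->codecpar);
stAudioOut->codecpar->codec_tag=0;

avio_open(&fcOut->pb,"out.mp4",AVIO_FLAG_WRITE);
avformat_write_header(fcOut,NULL); /* may adjust stream time bases */
/* ... run the muxing loop from the question ... */
av_write_trailer(fcOut);

Note that avformat_write_header() must run before any av_interleaved_write_frame() call, since it finalizes each output stream's time base.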


-
How to send encoded video (or audio) data from server to client in a way that's decodable by webcodecs API using minimal latency and data overhead
11 January 2023, by Tiger Yang
My question (read the entire post for context):


Given the unique circumstance of only ever decoding data from a specifically configured encoder, what is the best way I can send the encoded bitstream along with the bare minimum extra bytes required to properly configure the decoder on the client's end (including only things that change per stream, and omitting things that don't, such as resolution)? I'm a sucker for zero compromises, and I think I am willing to design my own minimal container format to accomplish this.


Context and problem:


I'm working on a remote desktop implementation that consists of a server which captures and encodes the display and speakers using FFmpeg and forwards the output via pipe to a Go program, which in turn sends it over two unidirectional WebTransport streams to my client, which I plan to decode with the WebCodecs API. According to MDN, the video decoder needs to be fed, via .configure(), an object containing the following before it's able to decode anything: https://developer.mozilla.org/en-US/docs/Web/API/VideoDecoder/configure


The same goes for the audio decoder: https://developer.mozilla.org/en-US/docs/Web/API/AudioDecoder/configure


What I've tried so far:


Because this remote desktop will be for my personal use only, it will only ever receive streams from a specific encoder configured in a specific way: a fixed resolution, framerate, color space, etc. Therefore, I took my video capture FFmpeg command...


videoString := []string{
    "ffmpeg",
    "-init_hw_device", "d3d11va",
    "-filter_complex", "ddagrab=video_size=1920x1080:framerate=60",
    "-vcodec", "hevc_nvenc",
    "-tune", "ll",
    "-preset", "p7",
    "-spatial_aq", "1",
    "-temporal_aq", "1",
    "-forced-idr", "1",
    "-rc", "cbr",
    "-b:v", "500K",
    "-no-scenecut", "1",
    "-g", "216000",
    "-f", "hevc", "-",
}



...and instructed it to write to an mp4 file instead of outputting to the pipe, and then had this webcodecs demo, https://w3c.github.io/webcodecs/samples/video-decode-display/, demux it using mp4box.js. Knowing that the demo builds a proper .configure() object, I blindly copied that object and had my client configure itself with it every time. Sadly, it didn't work, and I have since noticed that the "description" part of the configure object changes even though the encoder and its parameters stay the same.


I knew that mp4 files worked via mp4box.js, but a regular mp4 can't be streamed with low latency over a network. Additionally, ffmpeg's -f parameter specifies the muxer to use, and there are so many different ones to choose from.
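
(A side note on the low-latency point, offered as an assumption worth verifying for this pipeline rather than an established fix: ffmpeg can also write fragmented, streamable MP4, which mp4box.js is designed to parse incrementally, e.g.:)

ffmpeg -i INPUT -c copy -movflags frag_keyframe+empty_moov -f mp4 -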


At this point, I think I'm completely out of my depth, so:


Given the unique circumstance of only ever decoding data from a specifically configured encoder, what is the best way I can send the encoded bitstream along with the bare minimum extra bytes required to properly configure the decoder on the client's end (including only things that change per stream, and omitting things that don't, such as resolution)? I'm a sucker for zero compromises, and I think I am willing to design my own minimal container format to accomplish this. (copied above)
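
To make the "minimal container" idea concrete, here is a sketch in C of one possible framing. The record layout, the tag values, and the idea of sending a single configuration record up front are assumptions of this sketch, not an existing format:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical record types: one config record first (codec string,
 * decoder "description" bytes, ...), then one record per encoded packet. */
enum { REC_CONFIG=0, REC_VIDEO=1, REC_AUDIO=2 };

/* Each record: 1-byte type, 4-byte big-endian payload length, payload. */
static int write_record(FILE *out,uint8_t type,const uint8_t *payload,uint32_t len) {
    uint8_t hdr[5]={type,(uint8_t)(len>>24),(uint8_t)(len>>16),(uint8_t)(len>>8),(uint8_t)len};
    if(fwrite(hdr,1,sizeof(hdr),out)!=sizeof(hdr))
        return -1;
    if(len&&fwrite(payload,1,len,out)!=len)
        return -1;
    return 0;
}

int main(void) {
    const char *cfg="hev1,1920x1080,60"; /* made-up config payload */
    write_record(stdout,REC_CONFIG,(const uint8_t *)cfg,(uint32_t)strlen(cfg));
    /* ...then one REC_VIDEO/REC_AUDIO record per encoded frame... */
    return 0;
}

On the receiving side, parsing the same five-byte header before each payload yields the frame boundaries the decoder needs, since WebCodecs expects one complete encoded frame per chunk.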


-
How to extract a specific audio track (track 2) from an mp4 file using ffmpeg?
3 May 2019, by flash
I am working with an mp4 file (36017P.mp4) from which I want to extract Track 2 [English] using ffmpeg. I tried the following command in the terminal, but it seems to extract Track 1 [English] instead:

ffmpeg -i 36017P.mp4 filename.mp3

Problem statement:
I am wondering what changes I need to make to the ffmpeg command above so that it extracts Track 2 [English] from the mp4 file.
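
For reference, explicit stream selection in ffmpeg is done with the -map option. Assuming "Track 2" is the file's second audio stream (audio stream indices are zero-based), a command along these lines should select it:

ffmpeg -i 36017P.mp4 -map 0:a:1 filename.mp3

Here 0:a:1 means: from input 0, take audio stream number 1, i.e. the second audio stream.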