
Other articles (36)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash fallback is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...) -
From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, you need to create a SPIP article and attach the "source" video document to it.
When this document is attached to the article, two actions are carried out in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (6218)
-
lavc/rawdec: Use AV_PIX_FMT_PAL8 for raw 1 bpp video in AVI
28 January 2016, by Mats Peterson

From https://msdn.microsoft.com/en-us/library/windows/desktop/dd318229%28v=vs.85%29.aspx:

"If biCompression equals BI_RGB and the bitmap uses 8 bpp or less, the bitmap has a color table immediately following the BITMAPINFOHEADER structure. The color table consists of an array of RGBQUAD values. The size of the array is given by the biClrUsed member. If biClrUsed is zero, the array contains the maximum number of colors for the given bit depth; that is, 2^biBitCount colors."

Nothing about "monochrome" here. Unfortunately, pal8 to monow conversion seems a bit flaky, but that's another story.

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
-
.NET Process - Redirect stdin and stdout without causing deadlock
21 July 2017, by user1150856

I'm trying to encode a video file with FFmpeg from frames generated by my program and then redirect the output of FFmpeg back to my program to avoid having an intermediate video file.
However, I've run into what seems to be a rather common problem when redirecting outputs in System.Diagnostics.Process, mentioned in the remarks of the documentation here, which is that it causes a deadlock if run synchronously.
After tearing my hair out over this for a day, and trying several proposed solutions found online, I still cannot find a way to make it work. I get some data out, but the process always freezes before it finishes.
Here is a code snippet that produces said problem:
static void Main(string[] args)
{
    Process proc = new Process();
    proc.StartInfo.FileName = @"ffmpeg.exe";
    proc.StartInfo.Arguments = String.Format("-f rawvideo -vcodec rawvideo -s {0}x{1} -pix_fmt rgb24 -r {2} -i - -an -codec:v libx264 -preset veryfast -f mp4 -movflags frag_keyframe+empty_moov -",
        16, 9, 30);
    proc.StartInfo.UseShellExecute = false;
    proc.StartInfo.RedirectStandardInput = true;
    proc.StartInfo.RedirectStandardOutput = true;

    FileStream fs = new FileStream(@"out.mp4", FileMode.Create, FileAccess.Write);

    // Read output asynchronously
    using (AutoResetEvent outputWaitHandle = new AutoResetEvent(false))
    {
        proc.OutputDataReceived += (sender, e) =>
        {
            if (e.Data == null)
            {
                outputWaitHandle.Set();
            }
            else
            {
                string str = e.Data;
                byte[] bytes = new byte[str.Length * sizeof(char)];
                System.Buffer.BlockCopy(str.ToCharArray(), 0, bytes, 0, bytes.Length);
                fs.Write(bytes, 0, bytes.Length);
            }
        };
    }

    proc.Start();
    proc.BeginOutputReadLine();

    // Generate frames and write to stdin
    for (int i = 0; i < 30 * 60 * 60; i++)
    {
        byte[] myArray = Enumerable.Repeat((byte)Math.Min(i, 255), 9 * 16 * 3).ToArray();
        proc.StandardInput.BaseStream.Write(myArray, 0, myArray.Length);
    }

    proc.WaitForExit();
    fs.Close();
    Console.WriteLine("Done!");
    Console.ReadKey();
}

Currently I'm trying to write the output to a file anyway for debugging purposes, but this is not how the data will eventually be used.
If anyone knows a solution it would be very much appreciated.
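One pattern worth trying (a minimal sketch, untested against this exact setup, reusing the ffmpeg.exe and out.mp4 paths from the question): drop OutputDataReceived entirely, because it is line-oriented and decodes the output as text, which corrupts binary data like an MP4 stream. Instead, copy StandardOutput.BaseStream to the file on a background task while the main thread writes raw frames to stdin, then close stdin so FFmpeg can flush and exit:

// Requires: using System; using System.Diagnostics; using System.IO;
// using System.Linq; using System.Threading.Tasks;
static void Main(string[] args)
{
    Process proc = new Process();
    proc.StartInfo.FileName = @"ffmpeg.exe";
    proc.StartInfo.Arguments = String.Format("-f rawvideo -vcodec rawvideo -s {0}x{1} -pix_fmt rgb24 -r {2} -i - -an -codec:v libx264 -preset veryfast -f mp4 -movflags frag_keyframe+empty_moov -",
        16, 9, 30);
    proc.StartInfo.UseShellExecute = false;
    proc.StartInfo.RedirectStandardInput = true;
    proc.StartInfo.RedirectStandardOutput = true;
    proc.Start();

    // Pump stdout to the file as raw bytes on a background task so the
    // output pipe never fills up while this thread is blocked on stdin.
    FileStream fs = new FileStream(@"out.mp4", FileMode.Create, FileAccess.Write);
    Task pump = proc.StandardOutput.BaseStream.CopyToAsync(fs);

    // Generate frames and write them to stdin.
    for (int i = 0; i < 30 * 60 * 60; i++)
    {
        byte[] frame = Enumerable.Repeat((byte)Math.Min(i, 255), 9 * 16 * 3).ToArray();
        proc.StandardInput.BaseStream.Write(frame, 0, frame.Length);
    }

    // Closing stdin sends EOF: FFmpeg flushes its output and exits,
    // which in turn ends the stdout copy.
    proc.StandardInput.BaseStream.Close();
    pump.Wait();
    fs.Close();
    proc.WaitForExit();
    Console.WriteLine("Done!");
}

The deadlock in the original code comes from both pipes backing up at once: FFmpeg blocks writing stdout once the pipe buffer is full while the parent blocks writing stdin, so draining stdout concurrently, and as bytes rather than lines, addresses both the hang and the data corruption.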
-
Interesting behavior in Media Source Extensions
28 May 2020, by newtonian_fig

I'm trying to build a fairly standard video player using Media Source Extensions; however, I want the user to be able to control when the player moves on to a new video segment. For example, we might see the following behavior:

- Video player plays 1st segment
- Source Buffer runs out of data causing the video to appear paused
- When the user is ready, they click a button that adds the 2nd segment to the Source Buffer
- The video continues by playing the 2nd segment

This works well, except that when the video appears paused during step 2 it doesn't stop at the last frame of the 1st segment. Instead, it stops two frames before the end of the 1st segment. Those last two frames aren't being dropped; they just get played after the user clicks the button to advance the video. This is an issue for my application, and I'm trying to figure out a way to make sure all of the frames from the 1st segment get played before the end of step 2.

I suspect that these last two frames are getting held up in the video decoder buffer, especially since calling endOfStream() on my Media Source after adding the 1st segment to the Source Buffer causes the 1st segment to play all the way through with no frames left behind (a sketch building on this observation follows the code below).

Additional Info

- I created each video segment file from a series of PNGs using the following ffmpeg command:

  ffmpeg -i %04d.png -movflags frag_keyframe+empty_moov+default_base_moof video_segment.mp4

- Maybe this is a clue? End of stream situations not handled correctly (last frames are dropped)
- Another interesting thing to note is that if the video only has 2 frames or less, MSE doesn't play it at all.
- The browser I'm using is Chrome. The code for my MSE player is just taken from the Google Developers example, but I'll post it here for completeness. This code only covers up to step 2 since that's where the issue is.

const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener('sourceopen', sourceOpen, { once: true });

function sourceOpen() {
  URL.revokeObjectURL(video.src);
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f"');
  sourceBuffer.mode = 'sequence';

  // Fetch the video and add it to the Source Buffer
  fetch('https://s3.amazonaws.com/bucket_name/video_file.mp4')
    .then(response => response.arrayBuffer())
    .then(data => sourceBuffer.appendBuffer(data));
}
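Not an authoritative fix, but here is a minimal sketch of steps 3 and 4 built on the endOfStream() observation above. It assumes sourceBuffer and mediaSource are reachable from the click handler (i.e. hoisted out of sourceOpen()); the nextButton element and the second segment's URL are hypothetical placeholders:

// Step 3 (sketch): when the user is ready, append the 2nd segment.
nextButton.addEventListener('click', () => {
  fetch('https://s3.amazonaws.com/bucket_name/video_segment2.mp4')
    .then(response => response.arrayBuffer())
    .then(data => {
      sourceBuffer.appendBuffer(data);
      // If this was the final segment, signal end of stream once the
      // append completes; this is what flushed the two held-back
      // frames in the single-segment test described above.
      sourceBuffer.addEventListener('updateend', () => {
        if (mediaSource.readyState === 'open') {
          mediaSource.endOfStream();
        }
      }, { once: true });
    });
});

Note that endOfStream() ends the stream for good, so this only helps when the appended segment really is the last one; it does not by itself flush the decoder during pauses between intermediate segments.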