
Media (3)
-
The Slip - Artworks
26 September 2011, by
Updated: September 2011
Language: English
Type: Text
-
Podcasting Legal guide
16 May 2011, by
Updated: May 2011
Language: English
Type: Text
-
Creativecommons informational flyer
16 May 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (104)
-
Adding user-specific information and other changes to author-related behaviour
12 April 2011, by
The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (see its documentation for more information).
It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.
-
Use, discuss, criticize
13 April 2011, by
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
Media-specific libraries and software
10 December 2010, by
For correct and optimal operation, several things need to be taken into account.
After installing apache2, mysql and php5, it is important to install the other required software, whose installation is described in the related links: a set of multimedia libraries (x264, libtheora, libvpx) used for encoding and decoding video and audio in order to support as many file formats as possible (see this tutorial); FFMpeg with the maximum number of decoders and (...)
On other sites (7718)
-
Using ffmpeg.exe makes the computer slow; in Task Manager it takes over 1 GB of memory. How can I fix it?
29 May 2013, by Revuen Ben Dror
In my Form1 I have this code:
private void StartRecording_Click(object sender, EventArgs e)
{
ffmp.Start("test.avi", 25);
timer1.Enabled = true;
}
ffmp is a variable of my class: Ffmpeg.
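The timer handler isn't shown in the question; presumably it captures a frame and pushes it into the pipe, something like this sketch (timer1_Tick, the 1080p size and the screen-capture source are my assumptions, not the original code):
// Hypothetical timer tick: grab a 1920x1080 frame and hand it to the Ffmpeg helper.
private void timer1_Tick(object sender, EventArgs e)
{
using (var bmp = new Bitmap(1920, 1080, System.Drawing.Imaging.PixelFormat.Format32bppRgb))
using (var g = Graphics.FromImage(bmp))
{
g.CopyFromScreen(0, 0, 0, 0, bmp.Size); // capture the desktop into the bitmap
ffmp.PushFrame(bmp); // write the raw frame into the named pipe
}
}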
In this class I add frames to a pipe and create an AVI file.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Drawing;
using System.IO.Pipes;
using System.Runtime.InteropServices;
using System.Diagnostics;
namespace ScreenVideoRecorder
{
class Ffmpeg
{
NamedPipeServerStream p;
String pipename = "mytestpipe";
byte[] b;
System.Diagnostics.Process process;
public Ffmpeg()
{
}
public void Start(string FileName, int BitmapRate)
{
p = new NamedPipeServerStream(pipename, PipeDirection.Out, 1, PipeTransmissionMode.Byte);
b = new byte[1920 * 1080 * 3]; // buffer for one 1920x1080 frame, 3 bytes (B, G, R) per pixel
process = new System.Diagnostics.Process();
process.StartInfo.FileName = @"D:\pipetest\pipetest\ffmpegx86\ffmpeg.exe";
process.EnableRaisingEvents = false;
process.StartInfo.WorkingDirectory = @"D:\pipetest\pipetest\ffmpegx86";
process.StartInfo.Arguments = @"-f rawvideo -pix_fmt bgr0 -video_size 1920x1080 -i \\.\pipe\mytestpipe -map 0 -c:v libx264 -r " + BitmapRate + " " + FileName;
// configure StartInfo before calling Start(), otherwise these settings have no effect
process.StartInfo.UseShellExecute = false;
process.StartInfo.CreateNoWindow = false;
process.Start();
p.WaitForConnection();
}
public void PushFrame(Bitmap bmp)
{
int length;
// Lock the bitmap's bits.
Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
//Rectangle rect = new Rectangle(0, 0, 1280, 720);
System.Drawing.Imaging.BitmapData bmpData =
bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadOnly,
bmp.PixelFormat);
int absStride = Math.Abs(bmpData.Stride);
// Get the address of the first line.
IntPtr ptr = bmpData.Scan0;
// Declare an array to hold the bytes of the bitmap.
//length = 3 * bmp.Width * bmp.Height;
length = absStride * bmpData.Height;
byte[] rgbValues = new byte[length];
int j = bmp.Height - 1;
for (int i = 0; i < bmp.Height; i++)
{
IntPtr pointer = new IntPtr(bmpData.Scan0.ToInt32() + (bmpData.Stride * j));
System.Runtime.InteropServices.Marshal.Copy(pointer, rgbValues, absStride * (bmp.Height - i - 1), absStride);
j--;
}
p.Write(rgbValues, 0, length);
bmp.UnlockBits(bmpData);
}
public void Close()
{
p.Close();
}
}
}
The problem is when I run my application from Visual Studio 2012 Pro and click the button: it opens a console window and starts processing.
I tracked ffmpeg.exe in Task Manager and saw that it started at 996 MB and very quickly jumped to 1040 MB of memory usage. The CPU usage was only 16%.
Once I ended the ffmpeg.exe task, everything was smooth again.
While it was running, dragging my Form around the screen, for example, was slow and stuttered. After closing ffmpeg.exe I could drag the Form around smoothly and quickly, as it should be.
I tried to google it and found others with what I think is the same problem, but I'm not sure where the problem is or how to fix it.
I'm not sure which version of ffmpeg.exe I'm using, but I read in some places that newer versions don't work any better; maybe I'm mistaken here.
My Windows is 8 with 6 GB of RAM.
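One thing that might be worth experimenting with (an assumption on my part, not a confirmed fix): libx264's default preset keeps a fairly large lookahead and several reference frames in memory, so asking for a lighter preset should reduce both memory and CPU usage, at the cost of a bigger output file. A sketch of the modified Arguments line:
// Hypothetical variation of the Arguments line above; "-preset ultrafast" and "-tune zerolatency"
// are standard libx264 options that trade compression efficiency for much smaller encoder buffers.
process.StartInfo.Arguments = @"-f rawvideo -pix_fmt bgr0 -video_size 1920x1080 -i \\.\pipe\mytestpipe " +
@"-map 0 -c:v libx264 -preset ultrafast -tune zerolatency -r " + BitmapRate + " " + FileName;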
-
avformat/matroskaenc : Actually apply timestamp offset for Opus
31 August 2022, by Andreas Rheinhardt
Matroska generally requires timestamps to be nonnegative, but
there is an exception: data that corresponds to encoder delay
and is not supposed to be output anyway can have a negative
timestamp. This is achieved by using the CodecDelay header
field: the demuxer has to subtract this value from the raw
(nonnegative) timestamps of the corresponding track.
Therefore the muxer has to add this value first to write
this raw timestamp.

Support for writing CodecDelay has been added in FFmpeg commit
d92b1b1babe69268971863649c225e1747358a74 and in Libav commit
a1aa37dd0b96710d4a17718198a3f56aea2040c1. The former simply
wrote the header field and did not apply any timestamp offsets,
leading to desynchronisation (if one uses multiple tracks).
The latter applied it at two places, but not at the one where
it actually matters, namely in mkv_write_block(), leading to
the same desynchronisation as with the former commit. It furthermore
used the wrong stream timebase to convert the delay to the
stream's timebase, as the conversion used the timebase from
before avpriv_set_pts_info().

When the latter was merged in 82e4f39883932c1b1e5c7792a1be12dec6ab603d,
it was only done in a deactivated state that still did not
offset the timestamps when muxing due to "assertion failures
and av sync errors". a1aa37dd0b96710d4a17718198a3f56aea2040c1
made it definitely more likely to run into assertion failures
(namely if the relative block timestamp doesn't fit into an int16_t).

Yet all of the above issues have been fixed (in commits
962d63157322466a9a82f9f9d84c1b6f1b582f65,
5d3953a5dcfd5f71391b7f34908517eb6f7e5146 and
4ebeab15b037a21f195696cef1f7522daf42f3ee). This commit therefore
enables applying CodecDelay, fixing ticket #7182.

There is just one slight regression from this: if one has input
with encoder delay where the first timestamp is negative, but
the pts of the part of the data that is actually intended to be
output is nonnegative, then the timestamps will currently by default
be shifted to make them nonnegative before they reach the muxer;
the muxer will then ensure that the shifted timestamps are retained.
Before this commit, the muxer did not ensure this; instead the
timestamps that the demuxer will output were shifted and
if the first timestamp of the actually intended output was zero
before shifting, then this unintentional shift just cancels
the shift performed before the packet reached the muxer.
(But notice that this only applies if all the tracks use the same
CodecDelay, or the relative sync between tracks will be impaired.)
This happens in the matroska-opus-remux and matroska-ogg-opus-remux
FATE tests. Future commits will forward the information that
the Matroska muxer has a limited capability to handle negative
timestamps so that the shifting in libavformat can take advantage
of it.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
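To make the offset concrete, here is a small illustrative sketch (not the actual FFmpeg code; the 6.5 ms figure is just the typical libopus encoder delay, assumed for the example). The muxer adds CodecDelay to each packet timestamp before writing it, and the demuxer subtracts it again when reading:
// Illustrative only: data covering the encoder delay (pts < 0) can be stored with a nonnegative raw timestamp.
long codecDelayNs = 6500000; // assumed Opus encoder delay of 6.5 ms, expressed in nanoseconds
long packetPtsNs = -6500000; // first packet: pure encoder-delay data, not meant to be output
long rawTsWritten = packetPtsNs + codecDelayNs; // muxer: writes 0, nonnegative as Matroska requires
long ptsRecovered = rawTsWritten - codecDelayNs; // demuxer: recovers -6.5 ms again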
-
Live AAC and H264 data into live stream
10 May 2024, by tzuleger
I have a remote camera that captures H264-encoded video data and AAC-encoded audio data and places it into a custom ring buffer, which is then sent to a Node.js socket server, where each packet is detected as audio or video and handled accordingly. That data should turn into a live stream; the protocol doesn't matter, but the delay has to be around 4 seconds and it must be playable on iOS and Android devices.


After reading hundreds of pages of documentation, questions, or solutions on the internet, I can't seem to find anything about handling two separate streams of AAC and H264 data to create a live stream.


Despite attempting many different ways of achieving this, even having a working implementation of HLS, I want to revisit ALL options for live streaming, and I am hoping someone out there can give me advice or point me to specific documentation on how to achieve this goal.


To be specific, this is our goal:

- Stream AAC and H264 data from a remote cellular camera to a server, which will do some work on that data to live stream to one user (possibly more users in the future) on a mobile iOS or Android device.
- The delay of the live stream should be a maximum of 4 seconds; if the user has bad signal, then a longer delay is okay, as we obviously cannot do anything about that.
- We should not have to re-encode our data. We've explored WebRTC, but that requires Opus audio packets and thus requires us to re-encode the data, which would be expensive for our server to run.

Any and all help, ranging from re-visiting an old approach we took to exploring new ones, is appreciated.


I can provide code snippets as well for our current implementation of LLHLS if it helps, but I figured this post is already long enough.


I've tried FFmpeg with named pipes, I expected it to just work, but FFmpeg kept blocking on the first named pipe input. I thought of just writing the data out to two files and then using FFmpeg, but it's continuous data and I don't have enough knowledge on FFmpeg on how I could use that type of implementation to create one live stream.


I've tried implementing our own RTSP server on the camera using Gstreamer (our camera had its RTSP server stripped out, wasn't my call) but the camera's flash storage cannot handle having GStreamer on it, so that wasn't an option.


My latest attempt was using a derivation of hls-parser to create an HLS manifest and mux.js to create MP4 containers for .m4s fragmented mp4 files and do an HLS live stream. This was my most successful attempt: we had a live stream going, but the delay was up to 16 seconds, as one would expect with HLS live streaming. We could drop the target duration down to 2 seconds and get about 6-8 seconds of delay, but this could be unreliable, as these cameras can have very weak signal, which makes it relatively expensive to send so many IDR frames over such low bandwidth.
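As a rough sanity check on those delays (assuming a 6-second default target duration and the usual HLS rule of thumb that a player starts about three target durations behind the live edge): 3 × 6 s comes to roughly 18 s, and 3 × 2 s comes to roughly 6 s with a 2-second target, which lines up with the 16-second and 6-8-second delays observed.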

With the delay being the only factor left, I attempted to upgrade the implementation to support Apple's Low Latency HLS. It seems to work, as the right partial segments are getting requested and everything that makes up LLHLS is working as intended, but the delay isn't going down when played in iOS' native AVPlayer; as a matter of fact, it looks like it has worsened.


I would also like to add a disclaimer: my knowledge of media streaming is fairly limited. I've learned most of what I discuss in this post over the past 3 months by reading RFCs, documentation, and Stack Overflow/Reddit questions and answers. If anything appears confusing, it might just be my lack of understanding.