
Other articles (49)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources in the standalone version.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)
Making files available
14 April 2011
By default, upon initialisation, MediaSPIP does not allow visitors to download files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is possible and easy to give visitors access to these documents, in various forms.
All of this happens on the skeleton's configuration page. You need to go to the channel's administration area and choose in the navigation (...)
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in the standalone version.
For a working installation, all of the software dependencies must be installed manually on the server.
If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)
On other sites (7920)
-
FFMPEG bottleneck in relaying data from a dshow camera to stdout PIPE without any processing or conversion
19 August 2020, by koonyook
I have a USB camera (FSCAM_CU135) that can encode video to MJPEG internally, and it supports DirectShow. My goal is to retrieve the binary stream of the encoded video as-is (without decoding or preview) and send it to my program for further processing.


I chose to use FFmpeg to read the MJPEG stream and pipe it to stdout so that I can read it using Python's subprocess.Popen.


ffmpeg -y -f dshow -vsync 2 -rtbufsize 1000M -video_size 1920x1440 -vcodec mjpeg -i video="FSCAM_CU135" -vcodec copy -f mjpeg pipe:1
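
For reference, a minimal consumer of this pipe, sketched here in C (the question reads it from Python via subprocess.Popen instead; the 64 KiB chunk size is an arbitrary assumption):

#include <stdio.h>

int main(void)
{
    /* Launch ffmpeg and read raw MJPEG bytes from its stdout.
       Use _popen/_pclose instead on Windows. */
    FILE *p = popen("ffmpeg -y -f dshow -vsync 2 -rtbufsize 1000M "
                    "-video_size 1920x1440 -vcodec mjpeg "
                    "-i video=\"FSCAM_CU135\" -vcodec copy -f mjpeg pipe:1",
                    "r");
    if (!p)
        return 1;

    unsigned char buf[1 << 16];   /* 64 KiB chunks, arbitrary */
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, p)) > 0) {
        /* hand the bytes to the rest of the program here */
    }
    pclose(p);
    return 0;
}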



At this resolution, the camera can capture and transmit at 60 fps.
In this case, I expect FFmpeg to pass the data through as fast as possible, with no computation.
From FFmpeg's console output I can tell how fast it moves data from the real-time buffer (rtbufsize) to the output pipe.


With just one camera, FFmpeg works with no problem and moves the data at 60 fps.
However, when I run 2 cameras simultaneously, the cameras still generate data at 60 fps but FFmpeg can only move it at around 55 fps. This means I cannot consume the video in real time, and buffer usage grows over time.


I suspect FFmpeg does not simply move the data, but does some processing such as locating the beginning, the end, and the timestamp of each video frame so that it can count frames and report progress.
Is there a way to force FFmpeg to skip that work and focus only on passing the data through, to make it faster?
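
One thing worth trying, as a hedged suggestion rather than a confirmed fix: FFmpeg's per-frame statistics and logging can be switched off with the existing -nostats and -loglevel quiet options (note this also removes the console output used above to measure throughput):

ffmpeg -y -nostats -loglevel quiet -f dshow -vsync 2 -rtbufsize 1000M -video_size 1920x1440 -vcodec mjpeg -i video="FSCAM_CU135" -vcodec copy -f mjpeg pipe:1

Even then, the MJPEG parser still has to locate frame boundaries to packetise the stream, so some per-frame work is inherent to -vcodec copy.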


If I use the DirectShow API directly, without FFmpeg, could it be faster?


-
Encoding a Data Stream Alongside Video in FFMPEG
11 October 2016, by AdmiralJonB
I want to encode some proprietary data (a serialised unsigned 64-bit integer per frame) into a video container (mp4) as a data stream, but I have been unable to find any instructions or tutorials showing how.
The only thing I've been able to find is this mailing-list thread, which describes a possible way to create a data stream (apparently without success): https://lists.libav.org/pipermail/ffmpeg-user/2006-November/005070.html
This is my current code for creating a stream:
ff_data_stream = avformat_new_stream(ff_output_context, NULL);
ff_data_stream->codec->codec_type = AVMEDIA_TYPE_DATA;
ff_data_stream->codec->codec_id = AV_CODEC_ID_NONE;
ff_data_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;

But then when I call avformat_write_header, it errors with the following console output:

[mp4 @ 0x7fff68000900] track 1: could not find tag, codec not currently supported in container

So my questions are as follows:
- Is it possible to create a data stream in an mp4 container? If not, are there any containers that support one?
- This might not be the right way to do this, but I have not yet come across any other method.
- If so, how can I configure the stream correctly? (Whether for this container or another.)
- Would one then use an AVPacket when writing to file, and write it into the file using av_interleaved_write_frame?
Thanks
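
For what it's worth, once a muxer accepts the data track, each per-frame value would indeed be written as an AVPacket via av_interleaved_write_frame; a minimal sketch against the FFmpeg 3.x API of that era (the function name and timestamp handling are illustrative assumptions, and the mp4 tag problem above still has to be solved first):

#include <stdint.h>
#include <libavformat/avformat.h>

/* Sketch: write one serialised 64-bit value as a data packet. */
static int write_data_value(AVFormatContext *ff_output_context,
                            AVStream *ff_data_stream,
                            uint64_t value, int64_t frame_index)
{
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = (uint8_t *)&value;        /* payload bytes */
    pkt.size = sizeof(value);
    pkt.stream_index = ff_data_stream->index;
    pkt.pts = pkt.dts = frame_index;     /* in the data stream's time_base */
    pkt.flags |= AV_PKT_FLAG_KEY;
    return av_interleaved_write_frame(ff_output_context, &pkt);
}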
-
Displaying YUV420 data using Opengles shader is too slow
28 November 2012, by user1278982
I have a child thread called A that decodes video using ffmpeg on an iPhone 3GS, and another thread called B that displays the YUV data. In thread B, I use glTexSubImage2D to upload the Y, U, and V textures, then convert the YUV data to RGB in a shader, but the frame rate in the decode thread is only 15 fps. Why?
Update:
The frame size is 720 × 576.
I also found something interesting: if I don't start the thread that displays the YUV data, the frame rate calculated in the decode thread is 22 fps, otherwise 15 fps. So I think my displaying method must be inefficient. The code is below. I have a callback in the decode thread:
typedef struct _DVDVideoPicture
{
    char *plane[4];
    int   iLineSize[4];
} DVDVideoPicture;

void YUVCallBack(void *pYUVData, void *pContext)
{
    VideoView *view = (VideoView *)pContext;
    [view.glView copyYUVData:(DVDVideoPicture *)pYUVData];
    [view calculateFrameRate];
}

The copyYUVData method extracts the Y, U, and V planes separately. The following is the displaying thread method.
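
The post breaks off before the displaying-thread code. For context, a per-plane upload for YUV420 typically looks like the following sketch (texture IDs, dimensions, and tight row packing are assumptions here; the iLineSize strides from the struct above would need GL_UNPACK handling or a row-by-row copy):

#include <OpenGLES/ES2/gl.h>

/* Sketch: upload the Y, U, and V planes of a w*h YUV420 frame into
   three pre-created GL_LUMINANCE textures (U and V are quarter size). */
static void uploadYUV(const DVDVideoPicture *pic, int w, int h,
                      GLuint texY, GLuint texU, GLuint texV)
{
    glBindTexture(GL_TEXTURE_2D, texY);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE, pic->plane[0]);
    glBindTexture(GL_TEXTURE_2D, texU);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w / 2, h / 2,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE, pic->plane[1]);
    glBindTexture(GL_TEXTURE_2D, texV);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w / 2, h / 2,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE, pic->plane[2]);
}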