
Media (1)

- Ogg detection bug
22 March 2013
Updated: April 2013
Language: French
Type: Video

Other articles (53)

- Update from version 0.1 to 0.2
24 June 2013
Explanation of the various notable changes when moving from MediaSPIP version 0.1 to version 0.3. What is new regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favor of flvtool++; ffmpeg-php is no longer installed, as it is no longer maintained (...)

- Customizing by adding your logo, banner or background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

- Writing a news item
21 June 2013
Present the changes to your MediaSPIP, or news about your projects, using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: for a document of type news item, the default fields are: publication date (customize the publication date) (...)

On other sites (10935)

- Retrieving video details with ffprobe in PHP
15 July 2014, by Roohbakhsh Masoudi
I'm using this command to retrieve video information in an Ubuntu terminal:
ffprobe -show_streams sample.mp4
This command prints all the information about the video in the terminal, but when I run it from PHP and use the exec function to retrieve the result, it returns

string(0) ""
{}

How can I retrieve the result in PHP?
My code:
public function information($ffmpeg, $file)
{
    $info = exec("$ffmpeg -show_streams $file");
    return $info;
}

- FFMPEG Drag and Drop sh file
29 April 2021, by WFCDefi
On Windows I have a simple batch file that I drop video files onto to convert them to WebM; it saves a lot of time, as I prefer to just use the same configuration and don't care much about the names.


@echo off 
echo. 
ffmpeg -i %1 -c:v libvpx-vp9 -quality good -cpu-used 2 -b:v 5000k -qmin 15 -qmax 45 -maxrate 500k -bufsize 1500k -framerate 60 -threads 8 -vf scale=-1:1080 -c:a libvorbis -b:a 192k -f webm %1.webm 
pause



I know the .bat file won't really work on Linux (I'm on Pop!_OS, so pretty much Ubuntu), so with the other lines removed and %1 changed to $1, it works as a shell script. It won't do anything if I try dragging and dropping a video file onto it, though.

I can type sudo, drag and drop the .sh file followed by a video file into a terminal, and press Enter; that has the same effect as dragging a video file onto a .bat file on Windows.


Is there a way to recreate dragging and dropping a file directly onto another file and having it execute on Linux, or is the terminal the only way?


- Memory leak when using ffmpeg
21 January 2017, by se210
I have implemented a class which spawns a thread for reading and queuing frames, while the main thread displays these frames via OpenGL. I try to free the allocated memory after binding the image data to an OpenGL texture, but it seems some memory is not freed properly. Memory usage keeps growing until the system runs out of memory, and eventually the frame reader thread cannot grab new frames because memory allocation fails. Would someone please help me with what I might have missed? Thank you.
This is the code for the frame reader thread:
void AVIReader::frameReaderThreadFunc()
{
    AVPacket packet;
    while (readFrames) {
        // Allocate necessary memory
        AVFrame* pFrame = av_frame_alloc();
        if (pFrame == nullptr)
        {
            continue;
        }
        AVFrame* pFrameRGB = av_frame_alloc();
        if (pFrameRGB == nullptr)
        {
            av_frame_free(&pFrame);
            continue;
        }
        // Determine required buffer size and allocate buffer
        int numBytes = avpicture_get_size(AV_PIX_FMT_RGB24, pCodecCtx->width,
                                          pCodecCtx->height);
        uint8_t* buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
        if (buffer == nullptr)
        {
            av_frame_free(&pFrame);
            av_frame_free(&pFrameRGB);
            continue;
        }
        // Assign appropriate parts of buffer to image planes in pFrameRGB
        // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
        // of AVPicture
        avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_RGB24,
                       pCodecCtx->width, pCodecCtx->height);
        if (av_read_frame(pFormatCtx, &packet) >= 0) {
            // Is this a packet from the video stream?
            if (packet.stream_index == videoStream) {
                // Decode video frame
                int frameFinished;
                avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
                if (frameFinished) {
                    // Convert the image from its native format to RGB
                    sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                              pFrame->linesize, 0, pCodecCtx->height,
                              pFrameRGB->data, pFrameRGB->linesize);
                    VideoFrame vf;
                    vf.frame = pFrameRGB;
                    vf.pts = av_frame_get_best_effort_timestamp(pFrame) * time_base;
                    frameQueue.enqueue(vf);
                    av_frame_unref(pFrame);
                    av_frame_free(&pFrame);
                }
            }
            //av_packet_unref(&packet);
            av_free_packet(&packet);
        }
    }
}

This is the code that grabs the queued frames and binds them to an OpenGL texture. I explicitly keep the previous frame around until I switch it out with the next frame; otherwise it seems to cause a segfault.
void AVIReader::GrabAVIFrame()
{
    if (curFrame.pts >= clock_pts)
    {
        return;
    }
    if (frameQueue.empty())
        return;
    // Get a packet from the queue
    VideoFrame videoFrame = frameQueue.top();
    while (!frameQueue.empty() && frameQueue.top().pts < clock_pts)
    {
        videoFrame = frameQueue.dequeue();
    }
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, videoFrame.frame->data[0]);
    // release previous frame
    if (curFrame.frame)
    {
        av_free(curFrame.frame->data[0]);
    }
    av_frame_unref(curFrame.frame);
    // set current frame to new frame
    curFrame = videoFrame;
}

The frameQueue is a thread-safe priority queue that holds VideoFrame, defined as:
class VideoFrame {
public:
    AVFrame* frame;
    double pts;
};
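The queue implementation itself is not shown in the question. A minimal sketch of a thread-safe, pts-ordered queue with the same interface (enqueue, top, dequeue, empty) could look like the following; the class name FrameQueue and the comparator are assumptions for illustration, not the asker's actual code:

#include <mutex>
#include <queue>
#include <vector>

// Sketch of a pts-ordered, mutex-protected queue similar in spirit to the
// frameQueue used above. top()/dequeue() return the frame with the smallest
// pts, matching how GrabAVIFrame() consumes frames in pts order.
class FrameQueue {
public:
    void enqueue(const VideoFrame& vf) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(vf);
    }
    VideoFrame top() {
        std::lock_guard<std::mutex> lock(mutex_);
        return queue_.top();          // caller checks empty() first
    }
    VideoFrame dequeue() {
        std::lock_guard<std::mutex> lock(mutex_);
        VideoFrame vf = queue_.top();
        queue_.pop();
        return vf;
    }
    bool empty() {
        std::lock_guard<std::mutex> lock(mutex_);
        return queue_.empty();
    }
private:
    // std::priority_queue is a max-heap, so invert the comparison
    // to keep the smallest pts at the top.
    struct LaterPts {
        bool operator()(const VideoFrame& a, const VideoFrame& b) const {
            return a.pts > b.pts;
        }
    };
    std::mutex mutex_;
    std::priority_queue<VideoFrame, std::vector<VideoFrame>, LaterPts> queue_;
};

Like the calling code above, this sketch assumes a single consumer that checks empty() before calling top() or dequeue().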
Update: There was a silly error in the ordering of setting the current frame to the new frame. I forgot to switch it back after trying some things out. I also incorporated @ivan_onys's suggestion, but that does not seem to fix the problem.
Update 2: I adopted @Al Bundy's suggestion to release pFrame and packet unconditionally, but the issue still persists.
Since buffer is what contains the actual image data that glTexSubImage2D() needs, I cannot release it until I am done displaying it on screen (otherwise I get a segfault). avpicture_fill() assigns frame->data[0] = buffer, so I think calling av_free(curFrame.frame->data[0]); on the previous frame after texture-mapping the new frame should release the allocated buffer.
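Spelled out, the release order implied here is: free the av_malloc()'d buffer through data[0] first, then free the AVFrame container, since the frame does not own that buffer through FFmpeg's reference counting. A small helper capturing that order might look as follows; the name releaseQueuedFrame is an assumption, not part of the original code:

extern "C" {
#include <libavutil/frame.h>  // AVFrame, av_frame_free()
#include <libavutil/mem.h>    // av_freep()
}

// Assumed helper (not in the original code): release a frame whose data[0]
// buffer was allocated with av_malloc() and attached via avpicture_fill().
static void releaseQueuedFrame(AVFrame** frame)
{
    if (frame == nullptr || *frame == nullptr)
        return;
    av_freep(&(*frame)->data[0]); // free the av_malloc()'d RGB buffer first
    av_frame_free(frame);         // then free the AVFrame itself and null the pointer
}

With such a helper, the previous frame could be released with releaseQueuedFrame(&curFrame.frame) once the new frame has been uploaded with glTexSubImage2D().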
Here is the updated frame reader thread code:

void AVIReader::frameReaderThreadFunc()
{
    AVPacket packet;
    while (readFrames) {
        // Allocate necessary memory
        AVFrame* pFrame = av_frame_alloc();
        if (pFrame == nullptr)
        {
            continue;
        }
        AVFrame* pFrameRGB = av_frame_alloc();
        if (pFrameRGB == nullptr)
        {
            av_frame_free(&pFrame);
            continue;
        }
        // Determine required buffer size and allocate buffer
        int numBytes = avpicture_get_size(AV_PIX_FMT_RGB24, pCodecCtx->width,
                                          pCodecCtx->height);
        uint8_t* buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
        if (buffer == nullptr)
        {
            av_frame_free(&pFrame);
            av_frame_free(&pFrameRGB);
            continue;
        }
        // Assign appropriate parts of buffer to image planes in pFrameRGB
        // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
        // of AVPicture
        avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_RGB24,
                       pCodecCtx->width, pCodecCtx->height);
        if (av_read_frame(pFormatCtx, &packet) >= 0) {
            // Is this a packet from the video stream?
            if (packet.stream_index == videoStream) {
                // Decode video frame
                int frameFinished;
                avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
                if (frameFinished) {
                    // Convert the image from its native format to RGB
                    sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                              pFrame->linesize, 0, pCodecCtx->height,
                              pFrameRGB->data, pFrameRGB->linesize);
                    VideoFrame vf;
                    vf.frame = pFrameRGB;
                    vf.pts = av_frame_get_best_effort_timestamp(pFrame) * time_base;
                    frameQueue.enqueue(vf);
                }
            }
        }
        av_frame_unref(pFrame);
        av_frame_free(&pFrame);
        av_packet_unref(&packet);
        av_free_packet(&packet);
    }
}
Solved: It turned out the leaks were happening when the packet came from a non-video stream (e.g. audio). I also needed to free the resources of the frames that are skipped in the while loop of GrabAVIFrame().
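To make that concrete, here is a minimal sketch of the two fixes the note describes. It reuses the names from the question (pFrameRGB, frameQueue, clock_pts, videoFrame) together with the assumed releaseQueuedFrame helper from above and a hypothetical enqueuedRGBFrame flag; it illustrates the idea and is not the asker's final code:

// (1) End of each iteration in frameReaderThreadFunc(): pFrameRGB and its
// av_malloc()'d buffer are only handed off when a finished video frame was
// enqueued. For audio/subtitle packets, read errors or unfinished frames,
// they must be released here, otherwise every such packet leaks one buffer.
if (!enqueuedRGBFrame)               // hypothetical flag, set right after frameQueue.enqueue(vf)
    releaseQueuedFrame(&pFrameRGB);  // frees data[0] and the AVFrame itself

// (2) Skip loop in GrabAVIFrame(): frames that are dequeued but immediately
// superseded by a newer frame are never displayed, so they must be released too.
VideoFrame videoFrame = frameQueue.top();
bool dequeuedOne = false;
while (!frameQueue.empty() && frameQueue.top().pts < clock_pts) {
    if (dequeuedOne)
        releaseQueuedFrame(&videoFrame.frame); // drop the skipped frame's buffer and AVFrame
    videoFrame = frameQueue.dequeue();
    dequeuedOne = true;
}

The dequeuedOne flag only exists so that the initial top() copy is not released before it has actually been removed from the queue; the essential point is that every frame taken off the queue but never displayed gets the same buffer-then-frame release as curFrame.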