
Other articles (107)
-
Use it, talk about it, critique it
10 April 2011
The first attitude to adopt is to talk about it, either directly with the people involved in its development, or with those around you, to convince new people to use it.
The larger the community, the faster the changes will come...
A mailing list is available for any exchange between users.
-
Writing a news item
21 June 2013
Present the changes in your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News creation form: for a document of type "news", the fields offered by default are: publication date (customize the publication date) (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
On other sites (11110)
-
input stream error _read() is not implemented: fluent-ffmpeg
23 May 2018, by Thomsheer Ahamed
I have a buffer object. I want to pass that buffer as a readable stream to ffmpeg.
var ffmpeg = require('fluent-ffmpeg');
var ffmpegPath = require("ffmpeg-binaries").ffmpegPath();
ffmpeg.setFfmpegPath(ffmpegPath);

var command = new ffmpeg();
command.input(my_buffer)
    .videoCodec('libx264')
    .size('520x?')
    .aspect('4:3')
    .inputFPS(8)
    .outputFPS(30)
    .output('new_video.mp4')
    .on('start', onStrat)
    .on('progress', onProgress)
    .on('end', onEnd)
    .on('error', onError)
    .run();

If I pass the buffer right away, I get:

Error : Invalid input
    at FfmpegCommand.proto.mergeAdd.proto.addInput.proto.input

So I converted the buffer into a stream using:
function bufferToStream(buffer) {
    let Duplex = require('stream').Duplex;
    let stream = new Duplex();
    stream.push(new Buffer(buffer));
    stream.push(null);
    return stream;
}

var red = bufferToStream(finalResponse.file.buffer);

and passed this stream to the input:
command.input(red)
The above command throws "ffmpeg exited with code 1: Error opening filters!".
If I take my_buffer, save it locally using fs.writeFile('path', my_buffer), and pass that path to ffmpeg's input, then it works fine.
But I don't want to store the file and then delete it after altering the video.
Can someone help me?
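
One possible direction, as a sketch rather than a tested answer: the "_read() is not implemented" error appears when data is pushed into a bare Duplex/Readable that never implements _read(), and a piped input usually also needs an explicit input format, because ffmpeg cannot seek in a pipe to probe it (an MP4 with its moov atom at the end cannot be piped at all, which may explain the "Error opening filters" failure). Assuming the buffer holds a pipeable format, something like the following; the 'matroska' argument is a placeholder for whatever the buffer really contains:

const { Readable } = require('stream');
const ffmpeg = require('fluent-ffmpeg');

// Readable.from() (Node 10.17+) builds a proper readable stream from the
// buffer, so _read() is implemented for us.
const input = Readable.from([finalResponse.file.buffer]);

ffmpeg(input)
    .inputFormat('matroska')  // assumption: declare what the piped bytes are
    .videoCodec('libx264')
    .size('520x?')
    .aspect('4:3')
    .outputFPS(30)
    .output('new_video.mp4')
    .on('end', function () { console.log('done'); })
    .on('error', function (err) { console.error(err.message); })
    .run();

If the buffer is known to be MP4 with the moov atom at the end, writing it to a temporary file, as the question already does, may genuinely be the only option.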
-
No frame decodes after upgrading from ffmpeg 1.1 to 3.3
13 June 2018, by M.Mahdipour
I have C++ source code that uses libavcodec to decode H.264 frames from an RTSP stream. This code was written against ffmpeg 1.1. Now that I have upgraded to ffmpeg 3.3, everything seems to work correctly except that frames no longer decode.
In the old version I was using avcodec_decode_video2. After upgrading, avcodec_decode_video2 always sets got_picture to 0, and its return value equals the size of the input packet (which means all data was consumed). No frame is ever decoded.
I have also replaced avcodec_decode_video2 with avcodec_send_packet and avcodec_receive_frame, but avcodec_send_packet always returns 0 and avcodec_receive_frame always returns -11 (EAGAIN).

This is the code I use for decoding:
#include "stdafx.h"
#include
#include <string>
#include <iostream>
using namespace std;
extern "C"{
#include "libavformat/avformat.h"
#include "libavcodec/avcodec.h"
#include "libswscale/swscale.h"
#include "libavutil/pixfmt.h"
}
int extraDataSize;
static const int MaxExtraDataSize = 1024;
uint8_t extraDataBuffer[MaxExtraDataSize];

// Append a chunk (start code, SPS or PPS) to the extradata buffer.
void AddExtraData(uint8_t* data, int size)
{
    auto newSize = extraDataSize + size;
    if (newSize > MaxExtraDataSize) {
        throw "extradata exceeds size limit";
    }
    memcpy(extraDataBuffer + extraDataSize, data, size);
    extraDataSize = newSize;
}
int _tmain(int argc, _TCHAR* argv[])
{
    std::string strFramesPath("g:\\frames\\");
    AVCodec* avCodec;
    AVCodecContext* avCodecContext;
    AVFrame* avFrame;
    AVCodecID codecId = AV_CODEC_ID_H264;

    // SPS and PPS taken from the stream's sprop-parameter-sets
    unsigned char sprops_part_1[9] = { 0x27, 0x42, 0x80, 0x1f, 0xda, 0x02, 0xd0, 0x49, 0x10 };
    unsigned char sprops_part_2[4] = { 0x28, 0xce, 0x3c, 0x80 };

    av_register_all();
    avcodec_register_all();

    avCodec = avcodec_find_decoder(codecId);
    avCodecContext = avcodec_alloc_context3(avCodec);
    if (!avCodecContext)
    {
        cout << "avcodec_alloc_context3 failed." << endl;
        return 0;
    }

    uint8_t startCode[] = { 0x00, 0x00, 0x01 };

    // sprops
    {
        // sprops 1
        AddExtraData(startCode, sizeof(startCode));
        AddExtraData(sprops_part_1, 9);
        // sprops 2
        AddExtraData(startCode, sizeof(startCode));
        AddExtraData(sprops_part_2, 4);
        avCodecContext->extradata = extraDataBuffer;
        avCodecContext->extradata_size = extraDataSize;
    }
    AddExtraData(startCode, sizeof(startCode));

    avCodecContext->flags = 0;

    if (avcodec_open2(avCodecContext, avCodec, NULL) < 0)
    {
        cout << "failed to open codec" << endl;
        return 0;
    }

    avFrame = av_frame_alloc();
    if (!avFrame)
    {
        cout << "failed to alloc frame" << endl;
        return 0;
    }

    void *buffer = malloc(100 * 1024); // 100 KB buffer - all frames fit in this buffer
    for (int nFrameIndex = 0; nFrameIndex < 257; nFrameIndex++)
    {
        std::string strFilename = std::string("g:\\frames\\" + std::to_string(nFrameIndex));
        FILE* f = fopen(strFilename.c_str(), "rb");
        fseek(f, 0, SEEK_END);
        long nFileSize = ftell(f);
        fseek(f, 0, SEEK_SET);
        size_t nReadSize = fread(buffer, 1, nFileSize, f);
        fclose(f);
        // cout << strFilename << endl;
        if (nReadSize != (size_t)nFileSize)
        {
            cout << "Error reading file data" << endl;
            continue;
        }

        AVPacket avpkt;
        av_init_packet(&avpkt); // initialize pts/dts/flags/side data to defaults
        avpkt.data = (uint8_t*)buffer;
        avpkt.size = (int)nReadSize;

        while (avpkt.size > 0)
        {
            int got_frame = 0;
            auto len = avcodec_decode_video2(avCodecContext, avFrame, &got_frame, &avpkt);
            if (len < 0) {
                //TODO: log error
                cout << "Error decoding - error code: " << len << endl;
                break;
            }
            if (got_frame)
            {
                cout << "* Got 1 Decoded Frame" << endl;
            }
            avpkt.size -= len;
            avpkt.data += len;
        }
    }

    getchar();
    return 0;
}
Test frame data can be downloaded from this link:
Frames.zip (3.7 MB)

I have used the Windows builds from Builds - Zeranoe FFmpeg.
If you copy and paste this code into your IDE, it compiles successfully. With the new libavcodec versions, no frame is decoded. With the old version of libavcodec (20141216-git-92a596f), decoding starts when frame 2 is fed. Any ideas?
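
A sketch of what the decode loop looks like with the post-3.x send/receive API, under assumptions the original post does not confirm: that each file holds one complete H.264 access unit, and that the decoder is buffering frames for reordering, so it must be drained after the last packet. Note also that recent libavcodec expects extradata to be allocated with av_malloc() and padded with AV_INPUT_BUFFER_PADDING_SIZE zeroed bytes, rather than pointing at a static array as above (the context will also try to free it when it is itself freed).

// Decode one packet and print any frames it yields; pass pkt == NULL once
// after the last packet to enter drain mode and flush buffered frames.
static int decode_packet(AVCodecContext* ctx, AVFrame* frame, const AVPacket* pkt)
{
    int ret = avcodec_send_packet(ctx, pkt);
    if (ret < 0)
        return ret;               // error submitting input
    for (;;)
    {
        ret = avcodec_receive_frame(ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;             // EAGAIN: feed more input; EOF: fully drained
        if (ret < 0)
            return ret;           // real decoding error
        cout << "* Got 1 Decoded Frame" << endl;
    }
}

Dropped into the program above in place of the avcodec_decode_video2 loop, this would be called as decode_packet(avCodecContext, avFrame, &avpkt) for each file, then once more as decode_packet(avCodecContext, avFrame, NULL) after the loop. With H.264, a run of EAGAIN results on the first packets is normal until the decoder has seen enough input.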
-
How to fetch both live video frames and timestamps from ffmpeg into Python on Windows
8 May 2018, by vijiboy
Searching for an alternative, since OpenCV does not provide timestamps for a live camera stream (on Windows), which my computer vision algorithm requires, I found ffmpeg and this excellent article: https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
The solution uses ffmpeg, accessing its standard output (stdout) stream. I extended it to read the standard error (stderr) stream as well.

Working up the Python code on Windows, I received the video frames from ffmpeg's stdout, but stderr froze after delivering the showinfo video filter details (timestamp) for the first frame.
I recall seeing somewhere on an ffmpeg forum that video filters like showinfo are bypassed when the output is redirected. Is this why the following code does not work as expected?
Expected: it should write video frames to disk as well as print the timestamp details.
Actual: it writes the video frames but does not get the timestamp (showinfo) details.

Here is the code I tried:
import subprocess as sp
import numpy
import cv2

command = ['ffmpeg',
           '-i', 'e:\\sample.wmv',
           '-pix_fmt', 'rgb24',
           '-vcodec', 'rawvideo',
           '-vf', 'showinfo',  # video filter - showinfo will provide frame timestamps
           '-an', '-sn',       # -an, -sn disable audio and subtitle processing respectively
           '-f', 'image2pipe', '-']  # we need to output to a pipe

pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)  # TODO someone on an ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???

for i in range(10):
    raw_image = pipe.stdout.read(1280*720*3)
    img_info = pipe.stderr.read(244)  # 244 characters is the current output of the showinfo video filter
    print "showinfo output", img_info
    image1 = numpy.fromstring(raw_image, dtype='uint8')
    image2 = image1.reshape((720, 1280, 3))
    # write the video frame to file just to verify
    videoFrameName = 'Video_Frame{0}.png'.format(i)
    cv2.imwrite(videoFrameName, image2)
    # throw away the data in the pipe's buffer
    pipe.stdout.flush()
    pipe.stderr.flush()

So how can I still get the frame timestamps from ffmpeg into the Python code, so that they can be used in my computer vision algorithm?
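
One pattern that usually resolves this kind of freeze, as a sketch rather than a verified fix: the fixed-size pipe.stderr.read(244) blocks as soon as a showinfo line has a different length, so drain stderr line by line on a background thread and parse the pts_time field there; then neither pipe can stall the other. The names below (drain_stderr, frame_timestamps) are illustrative, and the thread assumes the pipe object from the code above:

import re
import threading

frame_timestamps = []  # pts_time values parsed from showinfo, in decode order

def drain_stderr(stderr):
    # readline never blocks waiting for a byte count that never arrives
    for line in iter(stderr.readline, b''):
        match = re.search(r'pts_time:\s*([0-9.]+)', line.decode('ascii', 'replace'))
        if match:
            frame_timestamps.append(float(match.group(1)))

stderr_thread = threading.Thread(target=drain_stderr, args=(pipe.stderr,))
stderr_thread.daemon = True
stderr_thread.start()

The main loop then only reads frames from stdout and pairs each frame i with frame_timestamps[i] once it appears.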