
Other articles (57)
-
Supporting all media types
13 April 2011 — Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)
-
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
From upload to final video [standalone version]
31 January 2010 — The path of an audio or video document through SPIPMotion is divided into three distinct steps.
Upload and retrieval of information from the source video
First, an SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
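The two extra actions described above (probing the source streams and generating a thumbnail) could be sketched with stock FFmpeg tools. This is purely an illustrative assumption on my part, not the actual SPIPMotion implementation; the file name `source.mp4` is hypothetical:

```shell
# Illustrative sketch only, not SPIPMotion's real code.

# 1. Retrieve technical information about the file's audio and video streams
ffprobe -v quiet -print_format json -show_streams source.mp4

# 2. Generate a thumbnail: extract a single frame one second in as an image
ffmpeg -i source.mp4 -ss 00:00:01 -vframes 1 thumbnail.jpg
```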
On other sites (6709)
-
Encoding IPIPIP Frames from x264 baseline 420 profile [duplicate]
5 August 2015, by Codec Guy — This question already has an answer here:
I am new to FFmpeg. I am trying to encode a stream as alternating I-frames and P-frames (IPIPIPIPIP) using the baseline profile.
I searched the FFmpeg forums but was not able to get the required output.
My script file:
export LD_LIBRARY_PATH=:./FFMPEGEncLibs
./ffmpegEnc -f rawvideo -r 25 -s 176x144 -vcodec rawvideo -i ./encIn/akiyo_qcif.yuv -c:v libx264 -x264-params cabac=0:8x8dct=0 -pix_fmt yuv420p -profile:v baseline -level 4.1 -psnr -intra -qp 9 -vframes 10 ./encOut/akiyo_cif.h264
Can someone suggest changes to my script file so that I can encode the stream in IPIPIP format?
Thanks in advance
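One possible approach (my suggestion, not an answer from the original thread): the `-intra` flag forces every frame to be an I-frame, so it must be removed; a GOP size of 2 then yields one I-frame followed by one P-frame. The flags `-g 2` and `-bf 0` below are my assumption of a fix; `-bf 0` is redundant under the baseline profile (which forbids B-frames) but makes the intent explicit:

```shell
# Sketch of a possible fix: drop -intra and set the GOP size to 2
# (one I-frame, one P-frame, repeating), giving the pattern IPIPIP...
./ffmpegEnc -f rawvideo -r 25 -s 176x144 -vcodec rawvideo \
    -i ./encIn/akiyo_qcif.yuv \
    -c:v libx264 -x264-params cabac=0:8x8dct=0 \
    -pix_fmt yuv420p -profile:v baseline -level 4.1 \
    -g 2 -bf 0 -qp 9 -psnr -vframes 10 \
    ./encOut/akiyo_cif.h264
```

Note that x264 may still insert extra I-frames at detected scene cuts; if that matters for a test stream, scene-cut detection can additionally be disabled (e.g. `scenecut=0` in `-x264-params`).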
-
Bug #3556: "The status has already been modified" message when refreshing the page after having m...
28 September 2018 — Re,
So, Marcimat and I looked at this on macOS, and neither Firefox nor Chrome replayed the form submit when the session was reloaded after quitting the browser.
I have just checked with Chrome and Firefox (dev edition) on Win 10, and likewise, no replayed submit.
Let's say the F5 still bothers me on principle, but since the question of the submit being replayed on session restore is resolved on the browser side, I give up ;-)
-
Segmentation fault with avcodec_encode_video2() while encoding H.264
16 July 2015, by Baris Demiray — I'm trying to convert a cv::Mat to an AVFrame to then encode it in H.264, and wanted to start from a simple example, as I'm a newbie in both. So I first read in a JPEG file, and then do the pixel format conversion with sws_scale() from AV_PIX_FMT_BGR24 to AV_PIX_FMT_YUV420P, keeping the dimensions the same, and it all goes fine until I call avcodec_encode_video2().
I read quite a few discussions regarding AVFrame allocation, and the question "segmentation fault while avcodec_encode_video2" seemed like a match, but I just can't see what I'm missing or getting wrong. Here is the minimal code with which you can reproduce the crash; it should be compiled with:
g++ -o OpenCV2FFmpeg OpenCV2FFmpeg.cpp -lopencv_imgproc -lopencv_highgui -lopencv_core -lswscale -lavutil -lavcodec -lavformat
Its output on my system:
cv::Mat [width=420, height=315, depth=0, channels=3, step=1260]
I'll soon crash..
Segmentation fault
And that sample.jpg file's details by the identify tool:
tool,~temporary/sample.jpg JPEG 420x315 420x315+0+0 8-bit sRGB 38.3KB 0.000u 0:00.000
Please note that I’m trying to create a video out of a single image, just to keep things simple.
#include <iostream>
#include <cassert>
using namespace std;
extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
#include <libavformat/avformat.h>
}
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
const string TEST_IMAGE = "/home/baris/temporary/sample.jpg";
int main(int /*argc*/, char** argv)
{
av_register_all();
avcodec_register_all();
/**
* Initialise the encoder
*/
AVCodec *h264encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
AVFormatContext *cv2avFormatContext = avformat_alloc_context();
/**
* Create a stream and allocate frames
*/
AVStream *h264outputstream = avformat_new_stream(cv2avFormatContext, h264encoder);
avcodec_get_context_defaults3(h264outputstream->codec, h264encoder);
AVFrame *sourceAvFrame = av_frame_alloc(), *destAvFrame = av_frame_alloc();
int got_frame;
/**
* Pixel formats for the input and the output
*/
AVPixelFormat sourcePixelFormat = AV_PIX_FMT_BGR24;
AVPixelFormat destPixelFormat = AV_PIX_FMT_YUV420P;
/**
* Create cv::Mat
*/
cv::Mat cvFrame = cv::imread(TEST_IMAGE, CV_LOAD_IMAGE_COLOR);
int width = cvFrame.size().width, height = cvFrame.size().height;
cerr << "cv::Mat [width=" << width << ", height=" << height << ", depth=" << cvFrame.depth() << ", channels=" << cvFrame.channels() << ", step=" << cvFrame.step << "]" << endl;
h264outputstream->codec->pix_fmt = destPixelFormat;
h264outputstream->codec->width = cvFrame.cols;
h264outputstream->codec->height = cvFrame.rows;
/**
* Prepare the conversion context
*/
SwsContext *bgr2yuvcontext = sws_getContext(width, height,
sourcePixelFormat,
h264outputstream->codec->width, h264outputstream->codec->height,
h264outputstream->codec->pix_fmt,
SWS_BICUBIC, NULL, NULL, NULL);
/**
* Convert and encode frames
*/
for (uint i=0; i < 250; i++)
{
/**
* Allocate source frame, i.e. input to sws_scale()
*/
avpicture_alloc((AVPicture*)sourceAvFrame, sourcePixelFormat, width, height);
for (int h = 0; h < height; h++)
memcpy(&(sourceAvFrame->data[0][h*sourceAvFrame->linesize[0]]), &(cvFrame.data[h*cvFrame.step]), width*3);
/**
* Allocate destination frame, i.e. output from sws_scale()
*/
avpicture_alloc((AVPicture *)destAvFrame, destPixelFormat, width, height);
sws_scale(bgr2yuvcontext, sourceAvFrame->data, sourceAvFrame->linesize,
0, height, destAvFrame->data, destAvFrame->linesize);
/**
* Prepare an AVPacket for encoded output
*/
AVPacket avEncodedPacket;
av_init_packet(&avEncodedPacket);
avEncodedPacket.data = NULL;
avEncodedPacket.size = 0;
// av_free_packet(&avEncodedPacket); w/ or w/o result doesn't change
cerr << "I'll soon crash.." << endl;
if (avcodec_encode_video2(h264outputstream->codec, &avEncodedPacket, destAvFrame, &got_frame) < 0)
exit(1);
cerr << "Checking if we have a frame" << endl;
if (got_frame)
av_write_frame(cv2avFormatContext, &avEncodedPacket);
av_free_packet(&avEncodedPacket);
av_frame_free(&sourceAvFrame);
av_frame_free(&destAvFrame);
}
}
Thanks in advance!
EDIT: And the stack trace after the crash:
Thread 2 (Thread 0x7fffe5506700 (LWP 10005)):
#0 0x00007ffff4bf6c5d in poll () at /lib64/libc.so.6
#1 0x00007fffe9073268 in () at /usr/lib64/libusb-1.0.so.0
#2 0x00007ffff47010a4 in start_thread () at /lib64/libpthread.so.0
#3 0x00007ffff4bff08d in clone () at /lib64/libc.so.6
Thread 1 (Thread 0x7ffff7f869c0 (LWP 10001)):
#0 0x00007ffff5ecc7dc in avcodec_encode_video2 () at /usr/lib64/libavcodec.so.56
#1 0x00000000004019b6 in main(int, char**) (argv=0x7fffffffd3d8) at ../src/OpenCV2FFmpeg.cpp:99
EDIT2: The problem was that I hadn't called avcodec_open2() on the codec, as spotted by Ronald. The final version of the code is at https://github.com/barisdemiray/opencv2ffmpeg/, with leaks and probably other problems, hoping that I'll improve it while learning both libraries.