
Media (2)
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
Other articles (17)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...) -
Contribute to documentation
13 April 2011
Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation from users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
To contribute, register to the project users’ mailing (...) -
Selection of projects using MediaSPIP
2 May 2011
The examples below are representative of specific uses of MediaSPIP for particular projects.
MediaSPIP farm @ Infini
The non-profit organization Infini develops hospitality activities, an internet access point, training, and innovative projects in the field of information and communication technologies, and hosts websites. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of this kind. Its members (...)
On other sites (4282)
-
feed raw yuv frame to ffmpeg with timestamp
30 August 2014, by hawk_with_wind
I am trying to pipe raw audio and video data to ffmpeg and push a realtime stream over the RTSP protocol on Android.
The command line looks like this:
"ffmpeg -re -f image2pipe -vcodec mjpeg -i "+vpipepath
+ " -f s16le -acodec pcm_s16le -ar 8000 -ac 1 -i - "
+ " -vcodec libx264 "
+ " -preset slow -pix_fmt yuv420p -crf 30 -s 160x120 -r 6 -tune film "
+ " -g 6 -keyint_min 6 -bf 16 -b_strategy 1 "
+ " -acodec libopus -ac 1 -ar 48000 -b:a 80k -vbr on -frame_duration 20 "
+ " -compression_level 10 -application voip -packet_loss 20 "
+ " -f rtsp rtsp://remote-rtsp-server/live.sdp";I’m using libx264 for video codec and libopus for audio codec.
The YUV frames are fed through a named pipe created by mkfifo, and the PCM frames are fed through stdin. It works, and I can fetch and play the stream with ffplay, but there is a severe audio/video sync issue: the audio is 5-10 seconds behind the video.
I guess the problem is that neither the YUV frames nor the PCM frames carry any timestamp; FFmpeg assigns timestamps when it is fed the data, but the audio and video capture threads cannot run at exactly the same rate.
Is there a way to add a timestamp to each raw data frame (something like PTS/DTS)? The approach I used is from this thread:
Android Camera Capture using FFmpeg
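(A hedged aside, not part of the original question: if the capture side drove the encoder through the libavcodec API instead of piping raw bytes to the ffmpeg CLI, each frame could carry its own capture time as a PTS. The helper below is a minimal sketch; the function name, the microsecond tick assumption and the enc_ctx/capture_start variables are illustrative, not from the post.)
#include <chrono>
extern "C" {
#include <libavutil/mathematics.h>
#include <libavutil/rational.h>
}

// Hypothetical helper: derive a PTS from the capture thread's wall clock.
// 'start' is the instant capture began; 'time_base' is the encoder's time base.
static int64_t pts_from_wallclock(std::chrono::steady_clock::time_point start,
                                  AVRational time_base) {
    using namespace std::chrono;
    const int64_t elapsed_us =
        duration_cast<microseconds>(steady_clock::now() - start).count();
    // Rescale elapsed microseconds into the encoder's time base.
    return av_rescale_q(elapsed_us, AVRational{1, 1000000}, time_base);
}

// Usage, once per captured frame, before it is handed to the encoder:
//   frame->pts = pts_from_wallclock(capture_start, enc_ctx->time_base);
-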
ffmpeg mono8/rgb24 images to yuv420p to x264 conversion
30 May 2014, by 3xpo
This is probably a trivial question, but I'm going crazy with this (ffmpeg) framework.
Based on this post (sws_scale YUV -> RGB distorted image) and a lot more searching, I've written the following code to take a char* pointer to an image buffer (Mono8 or RGB24) and convert it to YUV420P for encoding with x264. I have to stream these images between two PCs. This is the code:
bool compressImageX264(Frame *f, int size, char* image){
    codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (!codec) {
        std::cout << "Codec not found" << std::endl;
        return false;
    }
    c = avcodec_alloc_context3(codec);
    if (!c) {
        std::cout << "Could not allocate video codec context" << std::endl;
        return false;
    }
    av_opt_set(c->priv_data, "preset", "veryfast", 0);
    av_opt_set(c->priv_data, "tune", "zerolatency", 0);
    c->width = std::stoi(f->width);
    c->height = std::stoi(f->height);
    c->gop_size = 10;
    c->max_b_frames = 1;
    c->pix_fmt = AV_PIX_FMT_YUV420P; // EDIT: added later

    /* open it */
    if (avcodec_open2(c, codec, NULL) < 0) {
        std::cout << "Could not open codec" << std::endl;
        return false;
    }

    // Source frame wrapping the incoming Mono8/RGB24 buffer
    AVFrame *avFrameRGB = av_frame_alloc();
    if (!avFrameRGB) {
        std::cout << "Could not allocate video frame" << std::endl;
        return false;
    }
    avFrameRGB->format = std::stoi(f->channels) > 1 ? AV_PIX_FMT_RGB24 : AV_PIX_FMT_GRAY8;
    avFrameRGB->width = c->width;
    avFrameRGB->height = c->height;
    int ret = av_image_alloc(
        avFrameRGB->data, avFrameRGB->linesize,
        avFrameRGB->width, avFrameRGB->height, AVPixelFormat(avFrameRGB->format),
        32);
    if (ret < 0) {
        std::cout << "Could not allocate raw picture buffer" << std::endl;
        return false;
    }

    // Copy the raw input bytes into the source frame
    uint8_t *p = avFrameRGB->data[0];
    for (int i = 0; i < size; i++){
        *p++ = image[i];
    }

    // Destination frame in YUV420P for the encoder
    AVFrame* avFrameYUV = av_frame_alloc();
    avFrameYUV->format = AV_PIX_FMT_YUV420P;
    avFrameYUV->width = c->width;
    avFrameYUV->height = c->height;
    ret = av_image_alloc(
        avFrameYUV->data, avFrameYUV->linesize,
        avFrameYUV->width, avFrameYUV->height, AVPixelFormat(avFrameYUV->format),
        32);

    // Convert Mono8/RGB24 -> YUV420P
    SwsContext *img_convert_ctx = sws_getContext(c->width, c->height, AVPixelFormat(avFrameRGB->format),
                                                 c->width, c->height, AV_PIX_FMT_YUV420P,
                                                 SWS_FAST_BILINEAR, NULL, NULL, NULL);
    ret = sws_scale(img_convert_ctx,
                    avFrameRGB->data, avFrameRGB->linesize,
                    0, c->height,
                    avFrameYUV->data, avFrameYUV->linesize);
    sws_freeContext(img_convert_ctx);

    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = NULL; // packet data will be allocated by the encoder
    pkt.size = 0;

    avFrameYUV->pts = frameCount; frameCount++; // GLOBAL VARIABLE

    int got_output;
    ret = avcodec_encode_video2(c, &pkt, avFrameYUV, &got_output);
    if (ret < 0) { // <-- Where the code broke
        std::cout << "Error encoding frame" << std::endl;
        return false;
    }
    if (got_output) {
        std::cout << "Write frame " << frameCount - 1 << " (size = " << pkt.size << ")" << std::endl;
        char* buffer = new char[pkt.size];
        for (int i = 0; i < pkt.size; i++){
            buffer[i] = pkt.data[i];
        }
        f->buffer = buffer;
        f->size = std::to_string(pkt.size);
        f->compression = std::string("x264");
        av_free_packet(&pkt);
    }
    return true;
}
I know this is probably quite inefficient, but for now I just want to make it work. It fails when I call avcodec_encode_video2; the console prints this message: "Input picture width (640) is greater than stride (320)".
I think the conversion is the culprit, but I don't fully understand what the parameters mean.
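(For illustration only, not part of the original question: that error usually means the stride/linesize handed to sws_scale or the encoder is smaller than one row of pixels, so printing the strides right after the two av_image_alloc calls is a quick way to see which frame is mis-sized. The snippet reuses the avFrameRGB and avFrameYUV variables from the code above.)
// Diagnostic sketch: dump the strides that sws_scale and the encoder will see.
std::cout << "RGB frame: width=" << avFrameRGB->width
          << " linesize[0]=" << avFrameRGB->linesize[0] << std::endl;
std::cout << "YUV frame: width=" << avFrameYUV->width
          << " linesize[0]=" << avFrameYUV->linesize[0]   // Y plane
          << " linesize[1]=" << avFrameYUV->linesize[1]   // U plane
          << " linesize[2]=" << avFrameYUV->linesize[2]   // V plane
          << std::endl;
// For GRAY8 the first stride should be >= width, for RGB24 >= 3*width;
// for YUV420P the Y stride should be >= width and the U/V strides >= width/2.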
Thanks for all your help.
EDIT:
OK, I have found the "first" error. Now the conversion works properly and avcodec_encode_video2 returns 0. The problem now is that got_output is always zero and nothing is printed on the console.
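(A hedged note, not from the original question: with max_b_frames set, libx264 buffers a few input frames before it emits the first packet, so got_output can legitimately stay 0 for the first calls. Buffered packets are retrieved by flushing the encoder with a NULL frame once input ends; the sketch below reuses the c variable from the code above.)
// Flush sketch: after the last real frame, keep asking the encoder for the
// packets it has buffered by passing a NULL frame.
int got_output = 1;
while (got_output) {
    AVPacket flushPkt;
    av_init_packet(&flushPkt);
    flushPkt.data = NULL;   // the encoder allocates the payload
    flushPkt.size = 0;
    if (avcodec_encode_video2(c, &flushPkt, NULL, &got_output) < 0)
        break;
    if (got_output) {
        // ... hand flushPkt.data / flushPkt.size to the consumer here ...
        av_free_packet(&flushPkt);
    }
}
-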
How to limit the backward dependency between coded frames in ffmpeg/x264
26 March 2015, by Bastian35022
I am currently playing with ffmpeg + libx264, but I couldn't find a way to limit the backward dependency between coded frames.
Let me explain what I mean: I want the coded frames to contain references to at most, say, 5 frames in the future. As a result, no frame has to "wait" for more than 5 frames to be coded (which makes sense for low-latency applications).
I am aware of the -tune zerolatency option, but that's not what I want; I still want bidirectional prediction.
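(An illustrative aside rather than an answer from the original thread: in libx264 the delay a frame can incur is bounded by the B-frame settings and the lookahead buffers, so one hedged way to express "at most 5 frames into the future" is to cap those knobs on the codec context. The variable name c and the exact values are assumptions for the example.)
// Sketch: bound forward references and encoder lookahead for a libx264 context.
c->max_b_frames = 5;                               // at most 5 consecutive B-frames
av_opt_set(c->priv_data, "x264-params",
           "rc-lookahead=5:sync-lookahead=0", 0);  // limit lookahead buffering
On the ffmpeg command line this would correspond roughly to -bf 5 -x264-params "rc-lookahead=5:sync-lookahead=0".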