
Media (1)
-
SPIP - plugins - embed code - Exemple
2 September 2013, by
Updated: September 2013
Language: French
Type: Image
Other articles (83)
-
Updating from version 0.1 to 0.2
24 June 2013, by
An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What are the new features?
Regarding software dependencies: use of the latest FFMpeg versions (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for retrieving metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favor of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...) -
Customizing by adding your logo, banner, or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013, by
Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: for a document of the news type, the default fields are: publication date (customize the publication date) (...)
On other sites (11641)
-
How to encode using FFmpeg in Android (using H263)
10 May 2019, by Kenny910
I am trying to follow the sample encoding code in the FFmpeg documentation. I successfully built an application that encodes and generates an mp4 file, but I face the following problems:
1) I am using H263 for encoding, but I can only set the width and height of the AVCodecContext to 176x144; for other sizes (like 720x480 or 640x480) it returns failure.
2) I can't play the output mp4 file with the default Android player. Doesn't it support H263 mp4 files? (p.s. I can play it with other players.)
3) Is there any sample code on re-encoding video frames into a new video (that is, decoding a video and encoding it back with different quality settings; I would also like to modify the frame content)?
Here is my code, thanks !
JNIEXPORT jint JNICALL Java_com_ffmpeg_encoder_FFEncoder_nativeEncoder(JNIEnv* env, jobject thiz, jstring filename){
LOGI("nativeEncoder()");
avcodec_register_all();
avcodec_init();
av_register_all();
AVCodec *codec;
AVCodecContext *codecCtx;
int i;
int out_size;
int size;
int x;
int y;
int output_buffer_size;
FILE *file;
AVFrame *picture;
uint8_t *output_buffer;
uint8_t *picture_buffer;
/* Manual Variables */
int l;
int fps = 30;
int videoLength = 5;
/* find the H263 video encoder */
codec = avcodec_find_encoder(CODEC_ID_H263);
if (!codec) {
LOGI("avcodec_find_encoder() run fail.");
}
codecCtx = avcodec_alloc_context();
picture = avcodec_alloc_frame();
/* put sample parameters */
codecCtx->bit_rate = 400000;
/* resolution must be a multiple of two */
codecCtx->width = 176;
codecCtx->height = 144;
/* frames per second */
codecCtx->time_base = (AVRational){1,fps};
codecCtx->pix_fmt = PIX_FMT_YUV420P;
codecCtx->codec_id = CODEC_ID_H263;
codecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
/* open it */
if (avcodec_open(codecCtx, codec) < 0) {
LOGI("avcodec_open() run fail.");
}
const char* mfileName = (*env)->GetStringUTFChars(env, filename, 0);
file = fopen(mfileName, "wb");
if (!file) {
LOGI("fopen() run fail.");
}
(*env)->ReleaseStringUTFChars(env, filename, mfileName);
/* alloc image and output buffer */
output_buffer_size = 100000;
output_buffer = malloc(output_buffer_size);
size = codecCtx->width * codecCtx->height;
picture_buffer = malloc((size * 3) / 2); /* size for YUV 420 */
picture->data[0] = picture_buffer;
picture->data[1] = picture->data[0] + size;
picture->data[2] = picture->data[1] + size / 4;
picture->linesize[0] = codecCtx->width;
picture->linesize[1] = codecCtx->width / 2;
picture->linesize[2] = codecCtx->width / 2;
for (l = 0; l < videoLength; l++) { //encode 1 second of video
for (i = 0; i < fps; i++) { //prepare a dummy image YCbCr
//Y
for (y = 0; y < codecCtx->height; y++) {
for (x = 0; x < codecCtx->width; x++) {
picture->data[0][y * picture->linesize[0] + x] = x + y + i * 3;
}
}
//Cb and Cr
for (y = 0; y < codecCtx->height / 2; y++) {
for (x = 0; x < codecCtx->width / 2; x++) {
picture->data[1][y * picture->linesize[1] + x] = 128 + y + i * 2;
picture->data[2][y * picture->linesize[2] + x] = 64 + x + i * 5;
}
}
//encode the image
out_size = avcodec_encode_video(codecCtx, output_buffer, output_buffer_size, picture);
fwrite(output_buffer, 1, out_size, file);
}
//get the delayed frames
for(; out_size; i++) {
out_size = avcodec_encode_video(codecCtx, output_buffer, output_buffer_size, NULL);
fwrite(output_buffer, 1, out_size, file);
}
}
//add sequence end code to have a real mpeg file
output_buffer[0] = 0x00;
output_buffer[1] = 0x00;
output_buffer[2] = 0x01;
output_buffer[3] = 0xb7;
fwrite(output_buffer, 1, 4, file);
fclose(file);
free(picture_buffer);
free(output_buffer);
avcodec_close(codecCtx);
av_free(codecCtx);
av_free(picture);
LOGI("finish");
return 0;
}
-
Creating a usable H.264 video file
4 May 2019, by Ethan McTague
I am trying to use libavcodec to generate an mp4 video file from individual frames. Each input frame is a Qt QImage, and the output file is written using the Qt QFile class.
I've done this through a VideoTarget class which opens the given 'target' file when initialized, records frames when addFrame(image) is called, and then saves/closes the file when its destructor is called.
The class has the following fields :
AVCodec* m_codec = nullptr;
AVCodecContext *m_context = nullptr;
AVPacket* m_packet = nullptr;
AVFrame* m_frame = nullptr;
QFile m_target;
And it looks like this :
VideoTarget::VideoTarget(QString target, QObject *parent) : QObject(parent), m_target(target)
{
// Find video codec
m_codec = avcodec_find_encoder_by_name("libx264rgb");
if (!m_codec) throw std::runtime_error("Unable to find codec.");
// Make codec context
m_context = avcodec_alloc_context3(m_codec);
if (!m_context) throw std::runtime_error("Unable to allocate codec context.");
// Make codec packet
m_packet = av_packet_alloc();
if (!m_packet) throw std::runtime_error("Unable to allocate packet.");
// Configure context
m_context->bit_rate = 400000;
m_context->width = 1280;
m_context->height = 720;
m_context->time_base = (AVRational){1, 60};
m_context->framerate = (AVRational){60, 1};
m_context->gop_size = 10;
m_context->max_b_frames = 1;
m_context->pix_fmt = AV_PIX_FMT_RGB24;
if (m_codec->id == AV_CODEC_ID_H264)
av_opt_set(m_context->priv_data, "preset", "slow", 0);
// Open Codec
int ret = avcodec_open2(m_context, m_codec, nullptr);
if (ret < 0) {
throw std::runtime_error("Unable to open codec.");
}
// Open file
if (!m_target.open(QIODevice::WriteOnly))
throw std::runtime_error("Unable to open target file.");
// Allocate frame
m_frame = av_frame_alloc();
if (!m_frame) throw std::runtime_error("Unable to allocate frame.");
m_frame->format = m_context->pix_fmt;
m_frame->width = m_context->width;
m_frame->height = m_context->height;
m_frame->pts = 0;
ret = av_frame_get_buffer(m_frame, 24);
if (ret < 0) throw std::runtime_error("Unable to allocate frame buffer.");
}
void VideoTarget::addFrame(QImage &image)
{
// Ensure frame data is writable
int ret = av_frame_make_writable(m_frame);
if (ret < 0) throw std::runtime_error("Unable to make frame writable.");
// Prepare image
for (int y = 0; y < m_context->height; y++) {
for (int x = 0; x < m_context->width; x++) {
auto pixel = image.pixelColor(x, y);
int pos = (y * 1024 + x) * 3;
m_frame->data[0][pos] = pixel.red();
m_frame->data[0][pos + 1] = pixel.green();
m_frame->data[0][pos + 2] = pixel.blue();
}
}
m_frame->pts++;
// Send the frame
ret = avcodec_send_frame(m_context, m_frame);
if (ret < 0) throw std::runtime_error("Unable to send AV frame.");
while (ret >= 0) {
ret = avcodec_receive_packet(m_context, m_packet);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
return;
else if (ret < 0) throw std::runtime_error("Error during encoding.");
m_target.write((const char*)m_packet->data, m_packet->size);
av_packet_unref(m_packet);
}
}
VideoTarget::~VideoTarget()
{
int ret = avcodec_send_frame(m_context, nullptr);
if (ret < 0) throw std::runtime_error("Unable to send AV null frame.");
while (ret >= 0) {
ret = avcodec_receive_packet(m_context, m_packet);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
return;
else if (ret < 0) throw std::runtime_error("Error during encoding.");
m_target.write((const char*)m_packet->data, m_packet->size);
av_packet_unref(m_packet);
}
// Magic number at the end of the file
uint8_t endcode[] = { 0, 0, 1, 0xb7 };
m_target.write((const char*)endcode, sizeof(endcode));
m_target.close();
// Free codec stuff
avcodec_free_context(&m_context);
av_frame_free(&m_frame);
av_packet_free(&m_packet);
}When used, the class seems to work, and data is written to the file, except I am unable to play back the resulting file in any application.
My main suspect is these lines :
// Prepare image
for (int y = 0; y < m_context->height; y++) {
for (int x = 0; x < m_context->width; x++) {
auto pixel = image.pixelColor(x, y);
int pos = (y * 1024 + x) * 3;
m_frame->data[0][pos] = pixel.red();
m_frame->data[0][pos + 1] = pixel.green();
m_frame->data[0][pos + 2] = pixel.blue();
}
}
The libavcodec documentation was extremely vague regarding the layout of image data, so I effectively had to guess and be happy with the first thing that didn't crash, so chances are I'm writing this incorrectly. There's also the issue of the size mismatch between my pixel color calls (giving int values) and the 24-bits-per-pixel RGB format I have selected.
How do I tweak this code to output actual, functioning video files?
-
Command Line 'hls_segment_size' creates TS files with segment failures
14 October 2019, by Phantom
I am converting a simple mp4 video to HLS, but I need the segments to be approximately a specific size.
I did some research and found:
-hls_segment_size 17000000
17000000 bytes (17 MB)
This creates TS files of approximately that size (it does not have to be exact).
ffmpeg.exe -i "in.mp4" -vcodec copy -acodec aac -hls_list_size 0 -hls_segment_size 17000000 -f hls "out.m3u8"
In the m3u8 file, '#EXT-X-BYTERANGE' entries are created, which is the way I want it:
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:8.400000,
#EXT-X-BYTERANGE:1662108@0
SampleVideo_1280x720_30mb0.ts
#EXTINF:4.560000,
#EXT-X-BYTERANGE:383896@0
SampleVideo_1280x720_30mb1.ts
#EXTINF:3.120000,
#EXT-X-BYTERANGE:408712@383896
SampleVideo_1280x720_30mb1.ts
#EXTINF:5.640000,
#EXT-X-BYTERANGE:1161840@0
SampleVideo_1280x720_30mb2.ts
#EXTINF:1.880000,
#EXT-X-BYTERANGE:230864@0
SampleVideo_1280x720_30mb3.ts
#EXTINF:2.160000,
#EXT-X-BYTERANGE:330880@230864
SampleVideo_1280x720_30mb3.ts
#EXTINF:2.080000,
#EXT-X-BYTERANGE:489928@0
SampleVideo_1280x720_30mb4.ts
#EXTINF:4.400000,
#EXT-X-BYTERANGE:1564348@489928
SampleVideo_1280x720_30mb4.ts
...
It seems alright, but there is a little problem. I'm testing in a player in the browser, and when playback crosses from one segment to the next, the sound and video briefly freeze. It's very annoying and not natural.
Without '-hls_segment_size', the TS files play fine, and there is no BYTERANGE in the m3u8 file; however, the TS file sizes then follow the configured segment duration instead.
I am currently trying to get TS files close to a set size between 15 MB and 20 MB, while keeping BYTERANGE in the m3u8 file.
Does anyone have any ideas ?
Here's the problem I'm trying to describe:
http://phantsc.rf.gd/AAA/Bbb.html
Exactly at second 7 of the video a 'locking' happens; this occurs when going from one segment to another.
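Not an answer to the byte-range question itself, but one plausible cause of the hiccup: with -vcodec copy, segment and byte-range boundaries can only fall where the input stream allows, so a splice may land at a point players handle poorly. If re-encoding is acceptable, a commonly tried alternative is to target a segment duration and force keyframes at the cut points, which indirectly controls segment size when the bitrate is roughly constant (the 6-second duration below is an assumption, to be tuned toward the 15-20 MB goal):

```shell
# Re-encode, forcing a keyframe every 6 seconds so HLS can cut cleanly,
# then segment by duration instead of by byte size.
ffmpeg -i "in.mp4" -c:v libx264 -c:a aac \
  -force_key_frames "expr:gte(t,n_forced*6)" \
  -hls_time 6 -hls_list_size 0 -f hls "out.m3u8"
```

This produces whole-file segments (no BYTERANGE), so it trades the byte-range layout for clean segment boundaries; whether that trade is acceptable depends on why BYTERANGE was wanted in the first place.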