
Other articles (86)
-
Submit enhancements and plugins
13 April 2011
If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the core MediaSPIP functionality will be considered.
You can use the development discussion list to request help with creating a plugin. As MediaSPIP is based on SPIP, you can also use the SPIP discussion list, SPIP-Zone.
-
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04
If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...)
-
User profiles
12 April 2011
Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" (Edit your profile) link in the navigation is (...)
On other sites (14067)
-
Difficulty getting the expected output for .avi conversions to .mp4 using FFMPEG [closed]
25 April 2024, by Ericel
I am trying to write a video conversion script that converts an input video file to an .mp4 output. I am using FFmpeg's libx264 encoder. Unfortunately, when I convert from .avi to .mp4, the output video is not smooth; it is a little skippy. Here is how I set up the encoder:


AVCodecContext *Video::setupVideoEncoder(const AVCodec *encoder, AVStream *inVideoStream, AVFormatContext *outFormatCtx)
{
    AVCodecContext *codecCtx = avcodec_alloc_context3(encoder);
    if (!codecCtx)
    {
        std::cerr << "Failed to allocate the video codec context." << std::endl;
        return nullptr;
    }

    if (avcodec_parameters_to_context(codecCtx, inVideoStream->codecpar) < 0)
    {
        std::cerr << "Failed to copy codec parameters to encoder context." << std::endl;
        avcodec_free_context(&codecCtx);
        return nullptr;
    }

    // Correctly assign pixel format based on encoder support
    if (!check_pix_fmt(encoder, inVideoStream->codecpar->format))
    {
        codecCtx->pix_fmt = encoder->pix_fmts[0];
    }
    else
    {
        codecCtx->pix_fmt = static_cast<AVPixelFormat>(inVideoStream->codecpar->format);
    }
    codecCtx->width = inVideoStream->codecpar->width;
    codecCtx->height = inVideoStream->codecpar->height;
    codecCtx->bit_rate = 2000000; // 2 Mbps
    codecCtx->gop_size = 12;
    codecCtx->max_b_frames = 3;

    // Setting frame rate and time base using guessed values
    AVRational framerate = av_guess_frame_rate(outFormatCtx, inVideoStream, nullptr);
    codecCtx->framerate = framerate;
    codecCtx->time_base = av_inv_q(framerate);

    AVDictionary *opts = nullptr;
    av_dict_set(&opts, "x264-params", "keyint=25:min-keyint=25:no-scenecut=1", 0);

    if (outFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
    {
        codecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    if (avcodec_open2(codecCtx, encoder, &opts) < 0)
    {
        std::cerr << "Failed to open the video encoder." << std::endl;
        avcodec_free_context(&codecCtx);
        av_dict_free(&opts);
        return nullptr;
    }

    av_dict_free(&opts);
    return codecCtx;
}
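For reference, check_pix_fmt is not shown in the question. A plausible sketch of such a helper, assuming it simply scans the list of pixel formats the encoder advertises, could look like this (the helper itself is hypothetical):

extern "C" {
#include <libavcodec/avcodec.h>
}

// Hypothetical helper: returns true if the encoder advertises the given pixel format.
static bool check_pix_fmt(const AVCodec *encoder, int format)
{
    if (!encoder->pix_fmts)
        return false; // encoder does not publish a supported-format list
    for (const enum AVPixelFormat *p = encoder->pix_fmts; *p != AV_PIX_FMT_NONE; ++p)
    {
        if (*p == static_cast<AVPixelFormat>(format))
            return true;
    }
    return false;
}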


I can only think the configuration here is the problem, because when I convert a .mov to .mp4, I get the expected output.
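One common cause of skippy output when transcoding from .avi (which is often variable frame rate) is timestamp handling rather than the encoder settings themselves. Below is a minimal sketch of how frame and packet timestamps could be retimed around this encoder, not the questioner's code: the decoded frame, outVideoStream (the stream added to outFormatCtx) and the surrounding loop are assumed names, and only standard libav calls are used.

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
}

// Sketch: encode one decoded frame and mux the resulting packets.
static int encodeAndWrite(AVCodecContext *codecCtx, AVFrame *frame,
                          AVStream *inVideoStream, AVStream *outVideoStream,
                          AVFormatContext *outFormatCtx)
{
    // Rescale the frame timestamp from the input stream time base to the
    // encoder time base so the encoder sees consistent, increasing PTS.
    frame->pts = av_rescale_q(frame->pts, inVideoStream->time_base, codecCtx->time_base);

    if (avcodec_send_frame(codecCtx, frame) < 0)
        return -1;

    AVPacket *pkt = av_packet_alloc();
    while (avcodec_receive_packet(codecCtx, pkt) >= 0)
    {
        // Rescale packet timestamps from the encoder time base to the
        // output stream time base before muxing.
        av_packet_rescale_ts(pkt, codecCtx->time_base, outVideoStream->time_base);
        pkt->stream_index = outVideoStream->index;
        av_interleaved_write_frame(outFormatCtx, pkt);
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
    return 0;
}

Skipping either rescale (or reusing the input timestamps unchanged) can produce exactly the uneven playback described above.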


-
Video from Android Camera muxed with libavformat not playable in all players and audio not synced
19 January 2021, by Debrugger
I'm using avformat to mux encoded video and audio received from Android into an mp4 file. The resulting file is playable through ffplay, though it sometimes outputs "No Frame!" during playback. VLC kind of plays it back, but with glitches that look like the effect when movement data from one video is combined with color data from another. The video player on my phone does not play it at all.

On top of that, the audio is not properly synced, even though MediaCodec manages to produce a proper file with nothing more than the code below has available (i.e. presentationTimeUs in microseconds).

This is my code (error checking omitted for clarity):


// Initializing muxer
AVStream *videoStream = avformat_new_stream(outputContext, nullptr);
videoStreamIndex = videoStream->index;

videoStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
videoStream->codecpar->codec_id = AV_CODEC_ID_H264;
videoStream->codecpar->bit_rate = bitrate;
videoStream->codecpar->width = width;
videoStream->codecpar->height = height;
videoStream->time_base.num = 1;
videoStream->time_base.den = 90000;

AVStream* audioStream = avformat_new_stream(outputContext, nullptr);
audioStreamIndex = audioStream->index;
audioStream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
audioStream->codecpar->codec_id = AV_CODEC_ID_MP4ALS;
audioStream->codecpar->bit_rate = audiobitrate;
audioStream->codecpar->sample_rate = audiosampleRate;
audioStream->codecpar->channels = audioChannelCount;
audioStream->time_base.num = 1;
audioStream->time_base.den = 90000;

avformat_write_header(outputContext, &opts);

writtenAudio = writtenVideo = false;


// presentationTimeUs is the absolute timestamp when the encoded frame was received in Android code. 
// This is what is usually fed into MediaCodec
int writeVideoFrame(uint8_t *data, int size, int64_t presentationTimeUs) {
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.flags |= AV_PKT_FLAG_KEY; // I know setting this on every frame is wrong. When do I set it?
    pkt.data = data;
    pkt.size = size;
    pkt.dts = AV_NOPTS_VALUE;
    pkt.pts = presentationTimeUs;
    if (writtenVideo) { // since the timestamp is absolute we have to subtract the initial offset
        pkt.pts -= firstVideoPts;
    }
    // rescale from microseconds to the stream timebase
    av_packet_rescale_ts(&pkt, AVRational { 1, 1000000 }, outputContext->streams[videoStreamIndex]->time_base);
    pkt.dts = AV_NOPTS_VALUE;
    pkt.stream_index = videoStreamIndex;
    if (!writtenVideo) {
        AVStream* videoStream = outputContext->streams[videoStreamIndex];
        videoStream->start_time = pkt.pts;
        firstVideoPts = presentationTimeUs;
    }
    if (av_interleaved_write_frame(outputContext, &pkt) < 0) {
        return 1;
    }
    writtenVideo = true;
    return 0;
}

int writeAudioFrame(uint8_t *data, int size, int64_t presentationTimeUs) {
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = data;
    pkt.size = size;
    pkt.stream_index = audioStreamIndex;
    pkt.pts = presentationTimeUs;
    av_packet_rescale_ts(&pkt, AVRational { 1, 1000000 }, outputContext->streams[audioStreamIndex]->time_base);
    pkt.flags |= AV_PKT_FLAG_KEY;
    pkt.dts = AV_NOPTS_VALUE;
    if (!writtenAudio) {
        outputContext->streams[audioStreamIndex]->start_time = pkt.pts;
    }
    if (av_interleaved_write_frame(outputContext, &pkt) < 0) {
        return 1;
    }
    writtenAudio = true;
    return 0;
}

void close() {
    av_write_trailer(outputContext);
    running = false;

    // cleanup AVFormatContexts etc
}



I think I'm doing the same as shown in the avformat docs and examples, and the produced video is somewhat usable (re-encoding it with ffmpeg yields a working video). But some things must still be wrong.
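For comparison, here is a minimal sketch of one way such a packet could be prepared before muxing. It is not a drop-in fix for the code above: isKeyFrame and firstPts are hypothetical inputs (a flag derived from MediaCodec's key-frame buffer flag, and the first presentationTimeUs observed for the stream, captured before any packet is written). The idea is to make timestamps relative before rescaling, give the muxer an explicit dts when the encoder emits no B-frames, and mark only genuine key frames.

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

// Sketch: prepare and write one packet for the stream at streamIndex.
int writePacket(AVFormatContext *ctx, int streamIndex,
                uint8_t *data, int size,
                int64_t presentationTimeUs, int64_t firstPts, bool isKeyFrame)
{
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = data;
    pkt.size = size;
    pkt.stream_index = streamIndex;
    if (isKeyFrame)
        pkt.flags |= AV_PKT_FLAG_KEY; // only mark real key frames

    // Make timestamps relative to the first packet, then rescale from
    // microseconds to the stream time base (e.g. 1/90000).
    pkt.pts = presentationTimeUs - firstPts;
    pkt.dts = pkt.pts; // valid when the encoder produces no B-frames
    av_packet_rescale_ts(&pkt, AVRational { 1, 1000000 },
                         ctx->streams[streamIndex]->time_base);

    return av_interleaved_write_frame(ctx, &pkt) < 0 ? 1 : 0;
}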

-
Fluent ffmpeg how to use save callback
7 August 2018, by Desert P
I am using fluent-ffmpeg (GIT). I want to further process the saved file, but save does not have any callback. How can I use the saved file with a promise?
My code is:

ffmpeg(filename)
    .toFormat('mp3')
    .on('error', (err) => {
        console.log('An error occurred: ' + err.message);
    })
    .on('progress', (progress) => {
        console.log('Processing: ' + progress.targetSize + ' KB converted');
    })
    .on('end', () => {
        console.log('Processing finished !');
    })
    .save(`./${newname}.mp3`)

My problem is that the "save" function does not have a callback, so how could I save the output to S3 again?