Advanced search

Media (0)

Keyword: - Tags -/xmlrpc

No media matching your criteria is available on this site.

Other articles (84)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you wish to use this archive for an installation in "farm" mode, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images for a comparison.
    To use it, simply enable the Chosen plugin (General site configuration > Plugin management), then configure it (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (12648)

  • FFMpeg + Beanstalk: How to pass the processes to it or achieve the same result without using Beanstalk

    6 April 2017, by Ilia Rostovtsev

    My problem is that FFmpeg and MEncoder are extremely resource-intensive: running even one process slows HTTPd down, and multiple FFmpeg/MEncoder processes hang it completely. I would like my conversions to be processed through Beanstalk, for example.

    My concrete question is: how do I transfer my current jobs to Beanstalk?

    I have a simple PHP call that triggers the conversion:

    RunInBackground('convert.php', array($upload, $video_id), $log_path);

    Now, what would the correct Beanstalk code look like, so that these processes do NOT all start at the same time when multiple videos are uploaded?

    If you believe that something other than Beanstalk is better suited to my needs and you know how to implement it, I would still be happy to see it!

    Thanks in advance,
    Ilia

  • How to change metadata with ffmpeg/avconv without creating a new file?

    15 April 2014, by tampis

    I am writing a Python script for producing audio and video podcasts. There are a bunch of recorded media files (audio and video) and text files containing the meta information.

    Now I want to write a function that adds the information from the metadata text files to all media files (the original and the converted ones). Because I have to handle many different file formats (wav, flac, mp3, mp4, ogg, ogv...), it would be great to have a tool that adds metadata to arbitrary formats.

    My question:

    How can I change the metadata of a file with ffmpeg/avconv without changing its audio or video and without creating a new file? Is there another command-line or Python tool that would do the job for me?

    What I have tried so far:

    I thought ffmpeg/avconv could be such a tool, because it can handle nearly all media formats. I hoped that if I set -i input_file and output_file to the same file, ffmpeg/avconv would be smart enough to leave the file unchanged. Then I could set -metadata key=value and only the metadata would be changed.

    But I noticed that if I run avconv -i test.mp3 -metadata title='Test title' test.mp3, the audio in test.mp3 is re-encoded at a different bitrate.

    So I tried -c copy to copy all video and audio streams. Unfortunately, this does not work either:

    :~$ du -h test.wav # test.wav is 303 MB big
    303M    test.wav

    :~$ avconv -i test.wav -c copy -metadata title='Test title' test.wav
    avconv version 0.8.3-4:0.8.3-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
    built on Jun 12 2012 16:37:58 with gcc 4.6.3
    [wav @ 0x846b260] max_analyze_duration reached
    Input #0, wav, from 'test.wav':
    Duration: 00:29:58.74, bitrate: 1411 kb/s
       Stream #0.0: Audio: pcm_s16le, 44100 Hz, 2 channels, s16, 1411 kb/s
    File 'test.wav' already exists. Overwrite ? [y/N] y
    Output #0, wav, to 'test.wav':
    Metadata:
       title           : Test title
       encoder         : Lavf53.21.0
       Stream #0.0: Audio: pcm_s16le, 44100 Hz, 2 channels, 1411 kb/s
    Stream mapping:
    Stream #0:0 -> #0:0 (copy)
    Press ctrl-c to stop encoding
    size=     896kB time=5.20 bitrate=1411.3kbits/s    
    video:0kB audio:896kB global headers:0kB muxing overhead 0.005014%

    :~$ du -h test.wav # file size of test.wav changed dramatically
    900K    test.wav

    You can see that I cannot use -c copy when input_file and output_file are the same. Of course, I could produce a temporary file:

    :~$ avconv -i test.wav -c copy -metadata title='Test title' test_temp.wav
    :~$ mv test_temp.wav test.wav

    But this solution would temporarily create a new file on the filesystem and is therefore not preferable.

  • How to encode resampled PCM audio to AAC using the ffmpeg API when the input PCM sample count is not equal to 1024

    26 September 2016, by Aleksei2414904

    I am currently working on capturing audio and streaming it to an RTMP server. I work under macOS (in Xcode), so I use the AVFoundation framework to capture the audio sample buffers. For encoding and streaming, however, I need to use the ffmpeg API with the libfaac encoder, so the output format must be AAC (to support stream playback on iOS devices).

    I am facing the following problem: the audio capture device (in my case a Logitech camera) gives me sample buffers of 512 LPCM samples, and I can select an input sample rate of 16000, 24000, 36000 or 48000 Hz. When I feed these 512 samples to the AAC encoder (configured for the matching sample rate), I hear slow, jerky audio (it sounds as if there were a piece of silence after each frame).

    I figured out (maybe I am wrong) that the libfaac encoder only accepts audio frames of 1024 samples. When I set the input sample rate to 24000 Hz and resample the input buffer to 48000 Hz before encoding, I obtain 1024 resampled samples, and after encoding these 1024 samples to AAC I hear proper sound on the output. But my webcam produces 512 samples per buffer for any input sample rate, while the output sample rate must be 48000 Hz. So I need to resample in every case, and I will not obtain exactly 1024 samples per buffer after resampling.
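
    (A quick sanity check, not part of the original question: the expected output sample count can be computed with av_rescale_rnd, the same helper the encoding code below uses. Only the 24000 Hz capture rate turns a 512-sample buffer into exactly the 1024 samples AAC expects.)

       #include <inttypes.h>
       #include <stdio.h>
       #include <libavutil/mathematics.h>

       /* Rough output sample counts for a 512-sample capture buffer resampled
          to 48000 Hz (resampler delay ignored). */
       int main(void)
       {
           const int in_samples = 512, out_rate = 48000;
           const int in_rates[] = { 16000, 24000, 36000, 48000 };

           for (int i = 0; i < 4; i++) {
               int64_t out = av_rescale_rnd(in_samples, out_rate, in_rates[i], AV_ROUND_UP);
               printf("%d Hz -> %d Hz: %" PRId64 " samples\n", in_rates[i], out_rate, out);
           }
           return 0; /* prints 1536, 1024, 683 and 512 samples respectively */
       }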

    Is there a way to solve this problem within the ffmpeg API's functionality?

    I would be grateful for any help.

    PS:
    I guess I could accumulate the resampled buffers until the sample count reaches 1024 and then encode them, but this is a stream, so there would be trouble with the resulting timestamps and with other input devices; such a solution does not seem suitable (see the sketch after the code below).

    The current issue came out of the problem described in [question]: How to fill audio AVFrame (ffmpeg) with the data obtained from CMSampleBufferRef (AVFoundation)?

    Here is the code with the audio codec configuration (there was also a video stream, but the video works fine):

       /*global variables*/
       static AVFrame *aframe;
       static AVFrame *frame;
       AVOutputFormat *fmt;
       AVFormatContext *oc;
       AVStream *audio_st, *video_st;
    Init ()
    {
       AVCodec *audio_codec, *video_codec;
       int ret;

       avcodec_register_all();  
       av_register_all();
       avformat_network_init();
       avformat_alloc_output_context2(&oc, NULL, "flv", filename);
       fmt = oc->oformat;
       oc->oformat->video_codec = AV_CODEC_ID_H264;
       oc->oformat->audio_codec = AV_CODEC_ID_AAC;
       video_st = NULL;
       audio_st = NULL;
       if (fmt->video_codec != AV_CODEC_ID_NONE) {
           // …  /* init video codec */
       }
       if (fmt->audio_codec != AV_CODEC_ID_NONE) {
       audio_codec= avcodec_find_encoder(fmt->audio_codec);

       if (!(audio_codec)) {
           fprintf(stderr, "Could not find encoder for '%s'\n",
                   avcodec_get_name(fmt->audio_codec));
           exit(1);
       }
       audio_st= avformat_new_stream(oc, audio_codec);
       if (!audio_st) {
           fprintf(stderr, "Could not allocate stream\n");
           exit(1);
       }
       audio_st->id = oc->nb_streams-1;

       //AAC:
       audio_st->codec->sample_fmt  = AV_SAMPLE_FMT_S16;
       audio_st->codec->bit_rate    = 32000;
       audio_st->codec->sample_rate = 48000;
       audio_st->codec->profile=FF_PROFILE_AAC_LOW;
       audio_st->time_base = (AVRational){1, audio_st->codec->sample_rate };
       audio_st->codec->channels    = 1;
       audio_st->codec->channel_layout = AV_CH_LAYOUT_MONO;      


       if (oc->oformat->flags & AVFMT_GLOBALHEADER)
           audio_st->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }

       if (video_st)
       {
       //   …
       /*prepare video*/
       }
       if (audio_st)
       {
       aframe = avcodec_alloc_frame();
       if (!aframe) {
           fprintf(stderr, "Could not allocate audio frame\n");
           exit(1);
       }
       AVCodecContext *c;
       int ret;

       c = audio_st->codec;


       ret = avcodec_open2(c, audio_codec, 0);
       if (ret < 0) {
           fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
           exit(1);
       }

       //…
    }
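
    (An aside, not in the original code: once avcodec_open2() succeeds, the encoder reports the frame size it expects in c->frame_size, which is 1024 samples for AAC, i.e. exactly the constraint described above. A one-line check, purely illustrative:)

       // After avcodec_open2(), frame_size is the number of samples per channel
       // the encoder expects in each submitted AVFrame (1024 for AAC).
       printf("encoder expects %d samples per frame\n", c->frame_size);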

    And the audio resampling and encoding:

    if (mType == kCMMediaType_Audio)
    {
       CMSampleTimingInfo timing_info;
       CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &timing_info);
       double  pts=0;
       double  dts=0;
       AVCodecContext *c;
       AVPacket pkt = { 0 }; // data and size must be 0;
       int got_packet, ret;
        av_init_packet(&pkt);
       c = audio_st->codec;
         CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);

       NSUInteger channelIndex = 0;

       CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
       size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16));
       size_t lengthAtOffset = 0;
       size_t totalLength = 0;
       SInt16 *samples = NULL;
       CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)(&samples));

       const AudioStreamBasicDescription *audioDescription = CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer));

       SwrContext *swr = swr_alloc();

       int in_smprt = (int)audioDescription->mSampleRate;
       av_opt_set_int(swr, "in_channel_layout",  AV_CH_LAYOUT_MONO, 0);

       av_opt_set_int(swr, "out_channel_layout", audio_st->codec->channel_layout,  0);

       av_opt_set_int(swr, "in_channel_count", audioDescription->mChannelsPerFrame,  0);
       av_opt_set_int(swr, "out_channel_count", audio_st->codec->channels,  0);

       av_opt_set_int(swr, "out_channel_layout", audio_st->codec->channel_layout,  0);
       av_opt_set_int(swr, "in_sample_rate",     audioDescription->mSampleRate,0);

       av_opt_set_int(swr, "out_sample_rate",    audio_st->codec->sample_rate,0);

       av_opt_set_sample_fmt(swr, "in_sample_fmt",  AV_SAMPLE_FMT_S16, 0);

       av_opt_set_sample_fmt(swr, "out_sample_fmt", audio_st->codec->sample_fmt,  0);

       swr_init(swr);
       uint8_t **input = NULL;
       int src_linesize;
       int in_samples = (int)numSamples;
       ret = av_samples_alloc_array_and_samples(&input, &src_linesize, audioDescription->mChannelsPerFrame,
                                                in_samples, AV_SAMPLE_FMT_S16P, 0);


       *input=(uint8_t*)samples;
       uint8_t *output=NULL;


       // Upper bound on the output sample count: the resampler's delay plus the
       // input samples, rescaled from the capture rate to the encoder rate, rounded up.
       int out_samples = av_rescale_rnd(swr_get_delay(swr, in_smprt) + in_samples, (int)audio_st->codec->sample_rate, in_smprt, AV_ROUND_UP);

       av_samples_alloc(&output, NULL, audio_st->codec->channels, out_samples, audio_st->codec->sample_fmt, 0);
       in_samples = (int)numSamples;
       out_samples = swr_convert(swr, &output, out_samples, (const uint8_t **)input, in_samples);


       aframe->nb_samples =(int) out_samples;


       ret = avcodec_fill_audio_frame(aframe, audio_st->codec->channels, audio_st->codec->sample_fmt,
                                (uint8_t *)output,
                                (int) out_samples *
                                av_get_bytes_per_sample(audio_st->codec->sample_fmt) *
                                audio_st->codec->channels, 1);

       aframe->channel_layout = audio_st->codec->channel_layout;
       aframe->channels=audio_st->codec->channels;
       aframe->sample_rate= audio_st->codec->sample_rate;

       if (timing_info.presentationTimeStamp.timescale!=0)
           pts=(double) timing_info.presentationTimeStamp.value/timing_info.presentationTimeStamp.timescale;

       aframe->pts=pts*audio_st->time_base.den;
       aframe->pts = av_rescale_q(aframe->pts, audio_st->time_base, audio_st->codec->time_base);

       ret = avcodec_encode_audio2(c, &pkt, aframe, &got_packet);

       if (ret < 0) {
           fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
           exit(1);
       }
       swr_free(&swr);
       if (got_packet)
       {
           pkt.stream_index = audio_st->index;

           pkt.pts = av_rescale_q(pkt.pts, audio_st->codec->time_base, audio_st->time_base);
           pkt.dts = av_rescale_q(pkt.dts, audio_st->codec->time_base, audio_st->time_base);

           // Write the compressed frame to the media file.
          ret = av_interleaved_write_frame(oc, &pkt);
          if (ret != 0) {
               fprintf(stderr, "Error while writing audio frame: %s\n",
                       av_err2str(ret));
               exit(1);
           }

        }
    }
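
    (For what it's worth, the accumulation idea dismissed in the PS is what ffmpeg-based code usually does, using libavutil's AVAudioFifo: buffer every resampled chunk, drain it in frame_size-sized pieces, and derive pts from a running sample counter instead of from each capture buffer. The sketch below is illustrative only; it assumes a packed sample format such as AV_SAMPLE_FMT_S16, and the write_packet callback stands in for the rescale-and-av_interleaved_write_frame code above.)

       #include <libavcodec/avcodec.h>
       #include <libavutil/audio_fifo.h>

       /* Buffer resampled samples and drain them in encoder-sized chunks
          (c->frame_size, i.e. 1024 for AAC). Illustrative sketch only. */
       static int encode_buffered(AVCodecContext *c, AVAudioFifo *fifo, AVFrame *frame,
                                  uint8_t *resampled, int out_samples,
                                  int64_t *samples_written,
                                  int (*write_packet)(AVPacket *pkt, void *opaque), void *opaque)
       {
           int ret = av_audio_fifo_write(fifo, (void **)&resampled, out_samples);
           if (ret < 0)
               return ret;

           while (av_audio_fifo_size(fifo) >= c->frame_size) {
               AVPacket pkt = { 0 };
               int got_packet = 0;
               av_init_packet(&pkt);

               frame->nb_samples = c->frame_size;   /* frame buffers must hold frame_size samples */
               av_audio_fifo_read(fifo, (void **)frame->data, c->frame_size);

               frame->pts = *samples_written;       /* pts counted in samples (time base 1/sample_rate) */
               *samples_written += c->frame_size;

               ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
               if (ret < 0)
                   return ret;
               if (got_packet)
                   write_packet(&pkt, opaque);      /* rescale pts/dts and write, as in the code above */
           }
           return 0;
       }

    The fifo itself would be created once, after avcodec_open2(), with something like av_audio_fifo_alloc(c->sample_fmt, c->channels, 4 * c->frame_size), and every swr_convert() output would be pushed through such a function instead of being encoded directly.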