Advanced search

Media (5)

Word: - Tags -/open film making

Other articles (53)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your installed MédiaSpip is at version 0.2 or later. If needed, contact your MédiaSpip administrator to find out.

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFmpeg: the main encoder; it transcodes almost every type of video and audio file into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: inspection tools for Ogg files; MediaInfo: retrieves information from most video and audio formats;
    Additional, optional binaries flvtool2: (...)

On other sites (9638)

  • Lambda/ffmpeg timelapse generation - output zero bytes, can't debug ffmpeg

    25 August 2021, by GoOutside

    I am attempting to use an AWS Lambda FFmpeg layer to build a timelapse from static images in an S3 bucket. To begin, I am basing my project on the tutorial located here.

    I can replicate the steps in the tutorial, so I know the FFmpeg layer is working in Lambda. I have replicated the FFmpeg commands on a standalone server, so I know they are correct.

    Here is my setup: I have two S3 buckets, lambda-source-bucket and lambda-destination-bucket. The contents of lambda-source-bucket are:

1.jpg
2.jpg
3.jpg
4.jpg
5.jpg
6.jpg
7.jpg
files.txt


    The files.txt contains this:

file 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/1.jpg'
file 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/2.jpg'
file 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/3.jpg'
file 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/4.jpg'
file 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/5.jpg'
file 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/6.jpg'
file 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/7.jpg'


    This is my Lambda function code (in Python):

import json
import os
import subprocess
import shlex
import boto3

S3_DESTINATION_BUCKET = "lambda-destination-bucket"
SIGNED_URL_TIMEOUT = 60

def lambda_handler(event, context):

    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = "timelapse.mp4"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)

    ffmpeg_cmd = "/opt/bin/ffmpeg -y -r 24 -f concat -safe 0 -protocol_whitelist file,http,tcp,https,tls -i 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/files.txt' -c copy -s 1024x576 -vcodec libx264 -"
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)

    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }


    The trigger for the Lambda function is when a new files.txt file is added to lambda-source-bucket.

    So far I have been able to get the trigger to fire, the function supposedly runs without errors (in CloudWatch), and the function creates a new timelapse.mp4 in the lambda-destination-bucket. But this file is 0 bytes. I see no FFmpeg errors in the CloudWatch console, though I am not sure how to configure my Lambda function code to log FFmpeg errors.
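
    One thing I plan to try (a sketch only; the run_ffmpeg wrapper is just illustrative, around the p1 = subprocess.run(...) call above): ffmpeg writes its diagnostics to stderr, and anything printed from the handler lands in the function's CloudWatch log stream, so decoding and printing p1.stderr should show what ffmpeg actually did:

import subprocess

def run_ffmpeg(command1):
    # capture both streams; ffmpeg writes its diagnostics to stderr
    p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # print() output ends up in the function's CloudWatch log stream
    print(p1.stderr.decode('utf-8', errors='replace'))
    print('ffmpeg exited with code', p1.returncode)
    return p1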

    Also: if I'm going about this in a totally wrong way, I'd love to hear feedback. I'm guessing that the concat and files.txt method of looping through https:// URLs is not the most efficient way to do this, but it's the only way I have figured out so far.
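
    For instance, one variant I have considered (a sketch only; the bucket names are from my setup above, and the file layout is illustrative) is to download the frames into Lambda's /tmp, let ffmpeg's image2 demuxer read the numbered sequence directly, and write the MP4 to a local file before uploading it, since the mp4 muxer wants seekable output and may produce nothing useful on a pipe:

import subprocess
import boto3

def build_timelapse_locally():
    s3 = boto3.client('s3')
    # fetch the numbered frames into Lambda's writable /tmp directory
    for i in range(1, 8):
        s3.download_file('lambda-source-bucket', '%d.jpg' % i, '/tmp/%d.jpg' % i)

    # a numeric sequence lets the image2 demuxer replace concat entirely
    cmd = ['/opt/bin/ffmpeg', '-y', '-r', '24', '-i', '/tmp/%d.jpg',
           '-s', '1024x576', '-vcodec', 'libx264', '/tmp/timelapse.mp4']
    subprocess.run(cmd, check=True)

    s3.upload_file('/tmp/timelapse.mp4', 'lambda-destination-bucket', 'timelapse.mp4')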

    Any help is most sincerely and humbly appreciated.

  • How do I create a stereo MP3 file with the latest version of ffmpeg?

    17 June 2016, by Sean

    I'm updating my code from the older version of ffmpeg (53) to the newer one (54/55). Code that used to work has now been deprecated or removed, so I'm having problems updating it.

    Previously I could create a stereo MP3 file using a sample format called:

    SAMPLE_FMT_S16

    That matched up perfectly with my source stream. This has now been replaced with

    AV_SAMPLE_FMT_S16

    Which works fine for mono recordings, but when I try to create a stereo MP3 file it bugs out at avcodec_open2 with:

    "Specified sample_fmt is not supported."

    Through trial and error I’ve found that using

    AV_SAMPLE_FMT_S16P

    ...is accepted by avcodec_open2, but when I get through and create the MP3 file the sound is very distorted - it sounds about 2 octaves lower than usual with a massive hum in the background - here's an example recording:

    http://hosting.ispyconnect.com/example.mp3

    I've been told by the ffmpeg guys that this is because I now need to manually deinterleave my byte stream before calling:

    avcodec_fill_audio_frame

    How do I do that? I've tried using the swresample library without success, and I've tried manually feeding L/R data into avcodec_fill_audio_frame, but the results sound exactly the same as without deinterleaving.
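
    To make sure I understand the layout change involved, here is a sketch (illustrative Python only, not my actual C++ code): AV_SAMPLE_FMT_S16 stores samples as L0 R0 L1 R1 ..., while AV_SAMPLE_FMT_S16P wants each channel's samples contiguous, and my understanding is that avcodec_fill_audio_frame then expects those planes laid out back to back in the single buffer it is given:

import struct

def deinterleave_s16(interleaved: bytes, channels: int = 2) -> bytes:
    # unpack 16-bit little-endian samples: L0 R0 L1 R1 ...
    count = len(interleaved) // 2
    samples = struct.unpack('<%dh' % count, interleaved)
    # plane 0 = L0 L1 ..., plane 1 = R0 R1 ..., concatenated back to back
    planes = [samples[ch::channels] for ch in range(channels)]
    return b''.join(struct.pack('<%dh' % len(p), *p) for p in planes)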

    Here is my code for encoding:

    void add_audio_sample( AudioWriterPrivateData^ data, BYTE* soundBuffer, int soundBufferSize)
    {
       libffmpeg::AVCodecContext* c = data->AudioStream->codec;
       memcpy(data->AudioBuffer + data->AudioBufferSizeCurrent,  soundBuffer, soundBufferSize);
       data->AudioBufferSizeCurrent += soundBufferSize;
       uint8_t* pSoundBuffer = (uint8_t *)data->AudioBuffer;
       DWORD nCurrentSize    = data->AudioBufferSizeCurrent;

       libffmpeg::AVFrame *frame;

       int got_packet;
       int ret;
       int size = libffmpeg::av_samples_get_buffer_size(NULL, c->channels,
                                                 data->AudioInputSampleSize,
                                                 c->sample_fmt, 1);

       while( nCurrentSize >= size)    {

           frame=libffmpeg::avcodec_alloc_frame();
           libffmpeg::avcodec_get_frame_defaults(frame);

           frame->nb_samples = data->AudioInputSampleSize;

           ret = libffmpeg::avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt, pSoundBuffer, size, 1);
           if (ret<0)
           {
               throw gcnew System::IO::IOException("error filling audio");
           }
           //audio_pts = (double)audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;

           libffmpeg::AVPacket pkt = { 0 };
           libffmpeg::av_init_packet(&pkt);

           ret = libffmpeg::avcodec_encode_audio2(c, &pkt, frame, &got_packet);

           if (ret<0)
                   throw gcnew System::IO::IOException("error encoding audio");
           if (got_packet) {
               pkt.stream_index = data->AudioStream->index;

               // rescale packet timestamps from the codec time base to the stream time base
               if (pkt.pts != AV_NOPTS_VALUE)
                   pkt.pts = libffmpeg::av_rescale_q(pkt.pts, c->time_base, data->AudioStream->time_base);
               if (pkt.duration > 0)
                   pkt.duration = libffmpeg::av_rescale_q(pkt.duration, c->time_base, data->AudioStream->time_base);

               pkt.flags |= AV_PKT_FLAG_KEY;

               if (libffmpeg::av_interleaved_write_frame(data->FormatContext, &pkt) != 0)
                       throw gcnew System::IO::IOException("unable to write audio frame.");


           }
           nCurrentSize -= size;  
           pSoundBuffer += size;  
       }
       memcpy(data->AudioBuffer, data->AudioBuffer + data->AudioBufferSizeCurrent - nCurrentSize, nCurrentSize);
       data->AudioBufferSizeCurrent = nCurrentSize;

    }

    Would love to hear any ideas - I’ve been trying to get this working for 3 days now :(

  • ffmpeg: adding a non-muxed stream with correct codec type tagging

    13 January 2021, by Hamish

    In common use, I believe ffmpeg requires inputs to be in a specified muxer format which contains one or more data streams that can be decoded with a codec supported by the demuxer associated with the format. I have a data stream (not audio or video) which is already encoded with a codec but is not muxed. How can I get this stream into the ffmpeg pipeline with the correct codec type assigned, so that the muxer knows what to do with it?

    I have tried streaming the data over UDP and specifying the data demuxer. With some combinations I can get it to say it's streaming, but I can never get a player to connect, presumably because the output of mpegts is either null or invalid. Command line:

ffmpeg -v verbose ^
-f flv -listen 1 -i rtmp://127.0.0.1:1101 ^
-f data -i udp://127.0.0.1:1300 ^
-map 0:v -vcodec mpeg2video -map 1:d -f mpegts -mpegts_m2ts_mode 1 udp://localhost:1200


    Result (partial):

Input #0, flv, from 'rtmp://127.0.0.1:1101':
  Metadata:
    encoder         : Lavf58.29.100
  Duration: 00:00:00.00, start: 0.000000, bitrate: N/A
    Stream #0:0: Video: h264 (Constrained Baseline), 1 reference frame, yuv420p(progressive, left), 5760x1080 (5760x1088), 30 fps, 30 tbr, 1k tbn, 60 tbc
    Stream #0:1: Audio: mp3, 48000 Hz, stereo, fltp, 128 kb/s
Input #1, data, from 'udp://127.0.0.1:1300':
  Duration: N/A, start: 0.000000, bitrate: N/A
    Stream #1:0: Data: none
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> mpeg2video (native))
  Stream #1:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
[h264 @ 00000180d48ae700] Reinit context to 5760x1088, pix_fmt: yuv420p
[graph 0 input from stream 0:0 @ 00000180d489dcc0] w:5760 h:1080 pixfmt:yuv420p tb:1/1000 fr:30/1 sar:0/1
[mpegts @ 00000180d5f64040] Cannot automatically assign PID for stream 1
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:0 --
[AVIOContext @ 00000180d48b83c0] Statistics: 0 seeks, 0 writeouts
[AVIOContext @ 00000180d4882080] Statistics: 185593 bytes read, 0 seeks
[AVIOContext @ 00000180d5f469c0] Statistics: 204 bytes read, 0 seeks
Conversion failed!


    The codec type name is klv, which has the tag KLVA. It is only supported by the mpegts and mxf (de)muxers. I presume there must be a way of getting it into the pipeline without having a valid mpegts or mxf stream in the first place, otherwise we have a kind of paradox.

    I've tried specifying the codec on the input, but it fails validation, I guess because the data demuxer does not support it.

    Somehow mp4 files can be muxed from elementary streams (h264 and aac files), but I guess there must be some special case in the code to force the codec type based on the file extension.
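
    For example, muxing raw elementary streams works if I force the raw demuxers (a sketch with hypothetical input files video.h264 and audio.aac):

ffmpeg -f h264 -i video.h264 -f aac -i audio.aac -c copy output.mp4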

    I would really love to do this from the command line with a public build, but if this is absolutely not possible, I would also welcome some advice about how it could be achieved from C++ code.