Advanced search

Media (3)

Word: - Tags -/image

Other articles (24)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • Common problems

    10 March 2010, by

    PHP with safe_mode enabled
    One of the main sources of problems comes from the PHP configuration, in particular having safe_mode enabled.
    The solution would be either to disable safe_mode or to place the script in a directory that Apache can access for the site.

  • Customizing categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a rubrique (section).
    For a document of type category, the fields offered by default are: Texte
    This form can be modified in:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide
    It is also in this configuration section that you can specify the (...)

On other sites (5912)

  • ffmpeg + AWS Lambda issues. Won't compress full video

    7 July 2022, by Joesph Stah Lynn

    So I followed this tutorial to set everything up, and changed the function a bit to compress video, but no matter what I try, on larger videos (basically anything over 50-100MB), the output file will always be cut short, and depending on the encoding settings I'm using, will be cut by different amounts. I tried using the solution found here, adding a -nostdin flag to my ffmpeg command, but that also didn't seem to fix the issue.
    
    Another odd thing is that no matter what I try, if I remove the '-f mpegts' flag, the output video will be 0 B.
    
    My Lambda function is set up with 3008 MB of memory (I submitted a ticket to get my limit raised so I can use the full 10240 MB available) and 2048 MB of ephemeral storage (I honestly am not sure whether I need anything more than the minimum 512 MB, but I upped it to try to fix the issue). When I check my CloudWatch logs, on really large files it will occasionally time out, but other than that I get no error messages, just the standard start, end, and billable time messages.

    


    This is the code for my lambda function.

    


import json
import os
import subprocess
import shlex
import boto3

S3_DESTINATION_BUCKET = "rw-video-out"
SIGNED_URL_TIMEOUT = 600

def lambda_handler(event, context):

    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + "-comp.mp4"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)

    ffmpeg_cmd = f"/opt/bin/ffmpeg -nostdin -i {s3_source_signed_url} -f mpegts libx264 -preset fast -crf 28 -c:a copy - "
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    s3 = boto3.resource('s3')
    s3.Object(s3_source_bucket,s3_source_key).delete()

    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }
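
    One thing the CloudWatch observation above ("no error messages") is consistent with is that this function captures ffmpeg's stderr but never logs it, so any complaint from the encoder, or the process being killed mid-run, goes unseen. Below is a minimal sketch of surfacing that output; run_ffmpeg is a hypothetical helper that reuses the same /opt/bin/ffmpeg layer path, and it is not presented as a verified fix for the truncation itself.

    import shlex
    import subprocess

    def run_ffmpeg(ffmpeg_cmd):
        """Run ffmpeg, print the tail of its stderr, and fail loudly on a non-zero exit code."""
        proc = subprocess.run(shlex.split(ffmpeg_cmd),
                              stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE)
        # ffmpeg writes its diagnostics (bad options, truncated input, being killed mid-encode) to stderr
        print(proc.stderr.decode('utf-8', errors='replace')[-2000:])
        if proc.returncode != 0:
            raise RuntimeError(f'ffmpeg exited with code {proc.returncode}')
        return proc.stdout

    Calling something like run_ffmpeg(ffmpeg_cmd) in place of the bare subprocess.run would at least put ffmpeg's own explanation for a short output into the CloudWatch log.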


    


    This is the code from the solution I mentioned, but when I try using it, I get 'output.mp4 not found' errors.

    


def lambda_handler(event, context):
    print(event)
    os.chdir('/tmp')
    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + ".mp4"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)
    print(s3_source_signed_url)
    s3_client.download_file(s3_source_bucket,s3_source_key,s3_source_key)
    # ffmpeg_cmd = "/opt/bin/ffmpeg -framerate 25 -i \"" + s3_source_signed_url + "\" output.mp4 "
    ffmpeg_cmd = f"/opt/bin/ffmpeg -framerate 25 -i {s3_source_key} output.mp4 "
    # command1 = shlex.split(ffmpeg_cmd)
    # print(command1)
    os.system(ffmpeg_cmd)
    # os.system('ls')
    # p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    file = 'output.mp4'
    resp = s3_client.put_object(Body=open(file,"rb"), Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    # resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    s3 = boto3.resource('s3')
    s3.Object(s3_source_bucket,s3_source_key).delete()
    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }
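
    In this variant, the 'output.mp4 not found' error means ffmpeg never wrote the file, and os.system only returns an exit status, which the code never checks. A sketch of the same flow that keeps every path absolute under /tmp and checks the result explicitly is shown below; the function name and file names are illustrative, and this is a sketch rather than the asker's fix.

    import os
    import subprocess

    def transcode_in_tmp(s3_client, bucket, key):
        """Download the source into /tmp, run ffmpeg with absolute paths, return the local output path."""
        local_in = os.path.join('/tmp', os.path.basename(key))
        local_out = '/tmp/output.mp4'  # illustrative name
        s3_client.download_file(bucket, key, local_in)

        cmd = ['/opt/bin/ffmpeg', '-nostdin', '-y', '-i', local_in, local_out]
        proc = subprocess.run(cmd, stderr=subprocess.PIPE)
        if proc.returncode != 0 or not os.path.exists(local_out):
            # surface ffmpeg's own message instead of failing later on open(local_out)
            raise RuntimeError(proc.stderr.decode('utf-8', errors='replace')[-2000:])
        return local_out

    Keeping both paths absolute also avoids depending on the working directory that os.chdir('/tmp') is supposed to set.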


    


    Any help would be greatly appreciated.

    


  • Decoding MediaRecorder produced webm stream

    15 August 2019, by sgmg

    I am trying to decode a video stream from the browser using the ffmpeg API. The stream is produced by the webcam and recorded with MediaRecorder in webm format. What I ultimately need is a vector of OpenCV cv::Mat objects for further processing.

    I have written a C++ webserver using the uWebsocket library. The video stream is sent via websocket from the browser to the server once per second. On the server, I append the received data to my custom buffer and decode it with the ffmpeg API.

    If I just save the data to disk and later play it with a media player, it works fine. So whatever the browser sends is a valid video.

    I do not think I correctly understand how the custom IO should behave with network streaming, as nothing seems to be working.

    The custom buffer:

    struct Buffer
       {
           std::vector<uint8_t> data;
           int currentPos = 0;
       };

    The readAVBuffer method for the custom IO:

    int MediaDecoder::readAVBuffer(void* opaque, uint8_t* buf, int buf_size)
    {
       MediaDecoder::Buffer* mbuf = (MediaDecoder::Buffer*)opaque;
       int count = 0;
       for(int i=0;i<buf_size;i++)
       {
           int index = i + mbuf->currentPos;
           if(index >= (int)mbuf->data.size())
           {
               break;
           }
           count++;
           buf[i] = mbuf->data.at(index);
       }
       if(count > 0) mbuf->currentPos+=count;

       std::cout << "read : "<<count<<", pos: "<<mbuf->currentPos<<", buff size:"<<mbuf->data.size() << std::endl;
       if(count <= 0) return AVERROR(EAGAIN); //is this error that should be returned? It cannot be EOF since we're not done yet, most likely
       return count;
    }

    The big decode method, which is supposed to return whatever frames it could read:

    std::vector<cv::Mat> MediaDecoder::decode(const char* data, size_t length)
    {
       std::vector<cv::Mat> frames;
       //add data to the buffer
       for(size_t i=0;i<length;i++)
       {
           buf.data.push_back(data[i]);
       }
       //do not invoke the decoders until we have 1MB of data
       if(((buf.data.size() - buf.currentPos) < 1*1024*1024) && !initializedCodecs) return frames;

       std::cout << "decoding data length "<<length<<std::endl;
       if(!initializedCodecs) //initialize ffmpeg objects. Custom I/O, format, decoder, etc.
       {      
           //these are just members of the class
           avioCtxPtr = std::unique_ptr<AVIOContext,avio_context_deleter>(
                       avio_alloc_context((uint8_t*)av_malloc(4096),4096,0,&buf,&readAVBuffer,nullptr,nullptr),
                       avio_context_deleter());
           if(!avioCtxPtr)
           {
               std::cerr << "Could not create IO buffer" << std::endl;
               return frames;
           }                

           fmt_ctx = std::unique_ptr<AVFormatContext,avformat_context_deleter>(avformat_alloc_context(),
                                                                             avformat_context_deleter());
           fmt_ctx->pb = avioCtxPtr.get();
           fmt_ctx->flags |= AVFMT_FLAG_CUSTOM_IO ;
           //fmt_ctx->max_analyze_duration = 2 * AV_TIME_BASE; // read 2 seconds of data
           {
               AVFormatContext *fmtCtxRaw = fmt_ctx.get();            
               if (avformat_open_input(&fmtCtxRaw, "", nullptr, nullptr) < 0) {
                   std::cerr << "Could not open movie" << std::endl;
                   return frames;
               }
           }
           if (avformat_find_stream_info(fmt_ctx.get(), nullptr) < 0) {
               std::cerr << "Could not find stream information" << std::endl;
               return frames;
           }
           if((video_stream_idx = av_find_best_stream(fmt_ctx.get(), AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0)) < 0)
           {
               std::cerr << "Could not find video stream" << std::endl;
               return frames;
           }
           AVStream *video_stream = fmt_ctx->streams[video_stream_idx];
           AVCodec *dec = avcodec_find_decoder(video_stream->codecpar->codec_id);

           video_dec_ctx = std::unique_ptr<AVCodecContext,avcodec_context_deleter>(avcodec_alloc_context3(dec),
                                                                                 avcodec_context_deleter());
           if (!video_dec_ctx)
           {
               std::cerr << "Failed to allocate the video codec context" << std::endl;
               return frames;
           }
           avcodec_parameters_to_context(video_dec_ctx.get(),video_stream->codecpar);
           video_dec_ctx->thread_count = 1;
          /* video_dec_ctx->max_b_frames = 0;
           video_dec_ctx->frame_skip_threshold = 10;*/

           AVDictionary *opts = nullptr;
           av_dict_set(&opts, "refcounted_frames", "1", 0);
           av_dict_set(&opts, "deadline", "1", 0);
           av_dict_set(&opts, "auto-alt-ref", "0", 0);
           av_dict_set(&opts, "lag-in-frames", "1", 0);
           av_dict_set(&opts, "rc_lookahead", "1", 0);
           av_dict_set(&opts, "drop_frame", "1", 0);
           av_dict_set(&opts, "error-resilient", "1", 0);

           int width = video_dec_ctx->width;
           videoHeight = video_dec_ctx->height;

           if(avcodec_open2(video_dec_ctx.get(), dec, &opts) < 0)
           {
               std::cerr << "Failed to open the video codec context" << std::endl;
               return frames;
           }

           AVPixelFormat  pFormat = AV_PIX_FMT_BGR24;
           img_convert_ctx = std::unique_ptr<SwsContext,swscontext_deleter>(sws_getContext(width, videoHeight,
                                            video_dec_ctx->pix_fmt,   width, videoHeight, pFormat,
                                            SWS_BICUBIC, nullptr, nullptr,nullptr),swscontext_deleter());

           frame = std::unique_ptr<AVFrame,avframe_deleter>(av_frame_alloc(),avframe_deleter());
           frameRGB = std::unique_ptr<AVFrame,avframe_deleter>(av_frame_alloc(),avframe_deleter());


           int numBytes = av_image_get_buffer_size(pFormat, width, videoHeight,32 /*https://stackoverflow.com/questions/35678041/what-is-linesize-alignment-meaning*/);
           std::unique_ptr<uint8_t,avbuffer_deleter> imageBuffer((uint8_t *) av_malloc(numBytes*sizeof(uint8_t)),avbuffer_deleter());
           av_image_fill_arrays(frameRGB->data,frameRGB->linesize,imageBuffer.get(),pFormat,width,videoHeight,32);
           frameRGB->width = width;
           frameRGB->height = videoHeight;

           initializedCodecs = true;
       }    
       AVPacket pkt;
       av_init_packet(&pkt);
       pkt.data = nullptr;
       pkt.size = 0;

       int read_frame_return = 0;
       while ( (read_frame_return=av_read_frame(fmt_ctx.get(), &pkt)) >= 0)
       {
           readFrame(&frames,&pkt,video_dec_ctx.get(),frame.get(),img_convert_ctx.get(),
                     videoHeight,frameRGB.get());
           //if(cancelled) break;
       }
       avioCtxPtr->eof_reached = 0;
       avioCtxPtr->error = 0;


       //flush
      // readFrame(frames.get(),nullptr,video_dec_ctx.get(),frame.get(),
        //         img_convert_ctx.get(),videoHeight,frameRGB.get());

       avioCtxPtr->eof_reached = 0;
       avioCtxPtr->error = 0;

       if(frames.size() <= 0)
       {
           std::cout << "buffer pos: "<<buf.currentPos<<", buff size:"<<buf.data.size() << std::endl;
       }

       return frames;
    }

    What I would expect is a continuous extraction of cv::Mat frames as I feed it more and more data. What actually happens is that after the buffer is fully read I see:

    [matroska,webm @ 0x507b450] Read error at pos. 1278266 (0x13813a)
    [matroska,webm @ 0x507b450] Seek to desired resync point failed. Seeking to earliest point available instead.

    And then no more bytes are read from the buffer, even if I later add more data to it.

    There is something terribly wrong I’m doing here and I don’t understand what.

  • FFMPEG concat video throws No Such Filter error

    5 January 2021, by Rohan Shah

    I am trying to concatenate three videos using FFMPEG.

    


    This is the command I am executing using the Java Runtime:

    


        String command = "ffmpeg -i url_to_video -i url_to_video -i url_to_video -filter_complex [0:v] [0:a] [1:v] [1:a] [2:v] [2:a] concat=n=3:v=1:a=1 [v] [a] -map [v] -map [a] /home/rohan/output.mp4";

    Process process = Runtime.getRuntime().exec(command.split(" "));


    


    This throws a No Such Filter error, whereas if I run the same command in a terminal it works perfectly fine. I tried tweaking the command by removing or adding quotes, but every variant that runs in the terminal still throws an error through the Java Runtime.

    


    I have tried both absolute paths and Amazon S3 URLs; both work just fine in the terminal, but at runtime the S3 variant of the command throws a No Such Filter error and the absolute-path variant throws a No Such File Or Directory error.
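
    The "No such filter: ''" line in the log below is consistent with the filtergraph reaching ffmpeg in pieces: after command.split(" "), the value of -filter_complex is just [0:v], and parsing a graph that consists only of a link label ends with an empty filter name. Here is a sketch of the underlying idea, shown with Python's subprocess and hypothetical file names, where the whole graph is passed as a single argument, exactly as a quoted string would be in a shell:

    import subprocess

    inputs = ['in1.mp4', 'in2.mp4', 'in3.mp4']   # hypothetical local files
    # the whole filtergraph stays one argv element, like a quoted argument in a shell
    filtergraph = '[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1[v][a]'

    cmd = ['ffmpeg']
    for path in inputs:
        cmd += ['-i', path]
    cmd += ['-filter_complex', filtergraph, '-map', '[v]', '-map', '[a]', 'output.mp4']

    subprocess.run(cmd, check=True)

    The Java analogue would be handing Runtime.exec a String[] (or using ProcessBuilder) built the same way, rather than re-splitting a flat command string on spaces.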

    


    Here is the stack trace of the error

    


    Here is the standard error of the command (if any):

ffmpeg version n4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
  configuration: --prefix= --prefix=/usr --disable-debug --disable-doc --disable-static --enable-cuda --enable-cuda-sdk --enable-cuvid --enable-libdrm --enable-ffplay --enable-gnutls --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libmp3lame --enable-libnpp --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopus --enable-libpulse --enable-sdl2 --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxvid --enable-nonfree --enable-nvenc --enable-omx --enable-openal --enable-opencl --enable-runtime-cpudetect --enable-shared --enable-vaapi --enable-vdpau --enable-version3 --enable-xlib
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'https://openxcell-development-public.s3.ap-south-1.amazonaws.com/bhit/outputForBHit.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.45.100
  Duration: 00:00:10.57, start: 0.000000, bitrate: 2048 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1696 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 347 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'https://openxcell-development-public.s3.ap-south-1.amazonaws.com/bhit/outputForBHit.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.45.100
  Duration: 00:00:10.57, start: 0.000000, bitrate: 2048 kb/s
    Stream #1:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1696 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #1:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 347 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
Input #2, mov,mp4,m4a,3gp,3g2,mj2, from 'https://openxcell-development-public.s3.ap-south-1.amazonaws.com/bhit/outputForBHit.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.45.100
  Duration: 00:00:10.57, start: 0.000000, bitrate: 2048 kb/s
    Stream #2:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1696 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #2:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 347 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
[AVFilterGraph @ 0x564151595680] No such filter: ''
Error initializing complex filters.
Invalid argument