
Other articles (95)

  • MediaSPIP 0.1 Beta version

25 April 2011

MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Multilang: improving the interface for multilingual blocks

18 February 2011

Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once activated, a preconfiguration is applied automatically by MediaSPIP init, making the new feature immediately operational. No separate configuration step is therefore required.

  • APPENDIX: the plugins used specifically for the farm

5 March 2010

The central/master site of the farm needs several additional plugins, beyond those used by the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to handle registrations and requests to create a mutualisation instance when users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (5933)

  • ffmpeg python subprocess error only on ubuntu

10 December 2018, by Wonger

I'm working on an application that splits YouTube videos into images. I develop on a MacBook Pro, but our app servers run Ubuntu 12.04. The command currently running on our servers is the following:

    ffmpeg -i {video_file} -vf fps={fps}

which we run via a subprocess Popen call in Python. Performance is very slow because it essentially plays back the whole video file to extract the frames. I found another SO post saying you can use -accurate_seek -ss to grab single frames at specific times, but I am running into issues with it. When I run the command from the command line after SSHing into our dev server, it works fine, but when I run it via a subprocess Popen call in Python, I get the following error:

    (standard_in) 1: syntax error
    ffmpeg version 3.2.4-static http://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 5.4.1 (Debian 5.4.1-5) 20170205
     configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-5 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg
     libavutil      55. 34.101 / 55. 34.101
     libavcodec     57. 64.101 / 57. 64.101
     libavformat    57. 56.101 / 57. 56.101
     libavdevice    57.  1.100 / 57.  1.100
     libavfilter     6. 65.100 /  6. 65.100
     libswscale      4.  2.100 /  4.  2.100
     libswresample   2.  3.100 /  2.  3.100
     libpostproc    54.  1.100 / 54.  1.100
    Option accurate_seek (enable/disable accurate seeking with -ss) cannot be applied to output url test.mkv -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to.
    Error parsing options for output file test.mkv.
    Error opening output files: Invalid argument

Python code:

    import subprocess
    command = "for i in {0..3} ; do ffmpeg -accurate_seek -ss `echo $i*60.0 | bc` -i test.mkv -frames:v 1 images/test_img_$i.jpg ; done"
    process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr = subprocess.STDOUT, shell=True)
    output = process.communicate()
    print output[0].replace('\\n' , '\n')

The thing is, when I run this in my macOS terminal, it works fine, but something goes wrong on Ubuntu. Does anyone have experience with this issue?
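For reference, the same loop can be written without any shell features at all, which sidesteps whatever differs between the two systems' shells (a likely suspect is that `{0..3}` brace expansion and `bc` behave differently under Ubuntu's `/bin/sh`). This is only a sketch reusing the question's paths; `frame_grab_commands` is a hypothetical helper, not an existing API:

```python
import subprocess

def frame_grab_commands(video, count=4, interval=60.0):
    """Build one ffmpeg argument list per frame, doing the timestamp
    arithmetic in Python instead of in the shell via `bc`.

    Note -accurate_seek and -ss are placed before -i, so they are
    parsed as input options, which avoids the "cannot be applied to
    output url" error from the question.
    """
    commands = []
    for i in range(count):
        ts = i * interval
        commands.append([
            "ffmpeg",
            "-accurate_seek", "-ss", str(ts),   # input options: before -i
            "-i", video,
            "-frames:v", "1",
            "images/test_img_%d.jpg" % i,
        ])
    return commands

# Usage (no shell=True, so no dash/bash differences):
# for cmd in frame_grab_commands("test.mkv"):
#     subprocess.check_call(cmd)
```

Passing an argument list to `subprocess` (rather than a string with `shell=True`) also removes the quoting pitfalls entirely.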

Output when I run the exact same code on macOS:

    ffmpeg version 3.0.2 Copyright (c) 2000-2016 the FFmpeg developers
     built with Apple LLVM version 7.3.0 (clang-703.0.31)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/3.0.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-opencl --enable-libx264 --enable-libmp3lame --enable-libxvid --enable-libvpx --enable-vda
     libavutil      55. 17.103 / 55. 17.103
     libavcodec     57. 24.102 / 57. 24.102
     libavformat    57. 25.100 / 57. 25.100
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 31.100 /  6. 31.100
     libavresample   3.  0.  0 /  3.  0.  0
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
     Input #0, matroska,webm, from 'test.mkv':
     Metadata:
       COMPATIBLE_BRANDS: iso6avc1mp41
       MAJOR_BRAND     : dash
       MINOR_VERSION   : 0
       ENCODER         : Lavf57.25.100
     Duration: 00:02:10.20, start: 0.007000, bitrate: 2729 kb/s
       Stream #0:0: Video: h264 (High), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)
       Metadata:
         HANDLER_NAME    : VideoHandler
         DURATION        : 00:02:10.172000000
       Stream #0:1(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
       Metadata:
         DURATION        : 00:02:10.201000000
         [swscaler @ 0x7f9b4b008000] deprecated pixel format used, make sure you did set range correctly
    Output #0, image2, to 'images/test_img_0.jpg':
     Metadata:
       COMPATIBLE_BRANDS: iso6avc1mp41
       MAJOR_BRAND     : dash
       MINOR_VERSION   : 0
       encoder         : Lavf57.25.100
       Stream #0:0: Video: mjpeg, yuvj420p(pc), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 23.98 fps, 23.98 tbn, 23.98 tbc (default)
       Metadata:
         HANDLER_NAME    : VideoHandler
         DURATION        : 00:02:10.172000000
         encoder         : Lavc57.24.102 mjpeg
       Side data:
         unknown side data type 10 (24 bytes)
         Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
     Press [q] to stop, [?] for help
    frame=    1 fps=0.0 q=7.2 Lsize=N/A time=00:00:00.04 bitrate=N/A speed=   1x    
    video:73kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
    ....

One thing I forgot to mention: on my Mac I installed ffmpeg via Homebrew, whereas on Ubuntu I used a static build.

Other Stack Overflow reference: Fastest way to extract frames using ffmpeg?

  • Use libav to copy raw H264 stream as mp4 without codec

11 December 2018, by cKt 1010

Since my platform doesn't include libx264, I can't use the H264 codec in libav.
    I know this question is similar to Raw H264 frames in mpegts container using libavcodec, but trust me, I tested bob2's method and it didn't work.
    There are a few problems here:

1. How to set PTS and DTS?

If I use the settings below, which bob2 suggested, libav prints: "Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly"

    packet.flags |= AV_PKT_FLAG_KEY;
    packet.pts = packet.dts = 0;

So should I use the frame rate to calculate them manually?

2. How to set PPS and SPS?

bob2 didn't show it in his code, but it seems we can't skip this step. Someone told me it should be set in the extradata field of the AVCodecContext struct, but what is the format? Should it include the H264 header?

3. Should we delete the 0x00 0x00 0x00 0x01 header one by one?

It seems we must strip the H264 header from every H264 frame, but that takes time. Do we really have to do it?

My code is a mess (I tried too many methods and am lost in it now...). I paste it below and hope it doesn't confuse you.

Init:

    AVOutputFormat *fmt;
    AVFormatContext *oc;
    AVCodec *audio_codec = NULL, *video_codec = NULL;
    Int32 ret;

    assert(video_st != NULL);
    assert(audio_st != NULL);
    /* Initialize libavcodec, and register all codecs and formats. */
    av_register_all();
    /* allocate the output media context */
    printf("MediaSave: save file to %s", pObj->filePath);
    avformat_alloc_output_context2(&oc, NULL, NULL, pObj->filePath);
    if (!oc) {
       Vps_printf(
           "Could not deduce output format from file extension: using MPEG.");
       avformat_alloc_output_context2(&oc, NULL, "mpeg", pObj->filePath);
    }
    if (!oc)
       return SYSTEM_LINK_STATUS_EFAIL;
    pObj->oc = oc;

    fmt = oc->oformat;
    fmt->video_codec = AV_CODEC_ID_H264;
    Vps_printf("MediaSave: codec is %s", fmt->name);
    /* Add the video streams using the default format codecs
    * and initialize the codecs. */
    if ((fmt->video_codec != AV_CODEC_ID_NONE) &&
       (pObj->formate_type & MEDIA_SAVE_TYPE_VIDEO)) {
       add_stream(video_st, oc, &video_codec, fmt->video_codec);
       //open_video(oc, video_codec, video_st);
       pObj->video_st = video_st;
       pObj->video_codec = video_codec;
    }
    if ((fmt->audio_codec != AV_CODEC_ID_NONE) &&
       (pObj->formate_type & MEDIA_SAVE_TYPE_AUDIO)) {
       add_stream(audio_st, oc, &audio_codec, fmt->audio_codec);
       //open_audio(oc, audio_codec, audio_st);
       pObj->audio_codec = audio_codec;
       pObj->audio_st = audio_st;
    }

    /* open the output file, if needed */
    if (!(fmt->flags & AVFMT_NOFILE)) {
       ret = avio_open(&oc->pb, pObj->filePath, AVIO_FLAG_WRITE);
       if (ret < 0) {
           Vps_printf("Could not open '%s': %s", pObj->filePath,
                   av_err2str(ret));
           return SYSTEM_LINK_STATUS_EFAIL;
       }
    }

Write H264:

    /* Write the stream header, if any. */
    ret = avformat_write_header(oc, NULL);

    int nOffset = 0;
    int  nPos =0;
    uint8_t sps_pps[4] = { 0x00, 0x00, 0x00, 0x01 };
    while(1)
    {
       AVFormatContext *oc = pObj->oc;
       nPos = ReadOneNaluFromBuf(&naluUnit, bitstreamBuf->bufAddr + nOffset, bitstreamBuf->fillLength - nOffset);
       if(naluUnit.type == 7 || naluUnit.type == 8) {
           Vps_printf("Get type 7 or 8, Store it to extradata");
           video_st->st->codec->extradata_size = naluUnit.size + sizeof(sps_pps);
           video_st->st->codec->extradata = OSA_memAllocSR(OSA_HEAPID_DDR_CACHED_SR1, naluUnit.size + AV_INPUT_BUFFER_PADDING_SIZE, 32U);
           memcpy(video_st->st->codec->extradata, sps_pps , sizeof(sps_pps));
           memcpy(video_st->st->codec->extradata + sizeof(sps_pps), naluUnit.data, naluUnit.size);
           break;
       }
       nOffset += nPos;
       write_video_frame(oc, video_st, naluUnit);
       if (nOffset >= bitstreamBuf->fillLength) {
           FrameCounter++;
           break;
       }
    }

    static Int32 write_video_frame(AVFormatContext *oc, OutputStream *ost,
       NaluUnit bitstreamBuf) {
    Int32 ret;
    static Int32 waitkey = 1, ptsInc = 0;

    if (0 > ost->st->index) {
       Vps_printf("Stream index less than 0");
       return SYSTEM_LINK_STATUS_EFAIL;
    }
    AVStream *pst = oc->streams[ost->st->index];

    // Init packet
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.flags |= (0 >= getVopType(bitstreamBuf.data, bitstreamBuf.size))
       ? AV_PKT_FLAG_KEY
       : 0;
    pkt.stream_index = pst->index;

    // Wait for key frame
    if (waitkey) {
       if (0 == (pkt.flags & AV_PKT_FLAG_KEY)){
           return SYSTEM_LINK_STATUS_SOK;
       }
       else {
           waitkey = 0;
           Vps_printf("First frame");
       }
    }

    pkt.pts = (ptsInc) * (90000 / STREAM_FRAME_RATE);
    pkt.dts = (ptsInc) * (90000/STREAM_FRAME_RATE);

    pkt.duration = 3000;
    pkt.pos = -1;
    ptsInc++;
    ret = av_interleaved_write_frame(oc, &pkt);
    if (ret < 0) {
       Vps_printf("cannot write frame");
    }
    return SYSTEM_LINK_STATUS_SOK;

    }

  • FFmpeg check audio channels for silence

10 February 2019, by Tina J

I have two .mp4 files, both with 8 (7.1) audio channels. But in fact, I've been told that one has a stereo pair plus 2 SAP channels (secondary audio on channels 7-8), and the other has 6 (5.1) audio channels plus 2 SAP channels (on channels 7-8). So the latter has some real audio channels, such as the center channel, that don't exist in the former stereo one (it has those channels, but apparently they are silent/muted).

I've been trying to find some differentiating metadata to tell the two apart using MediaInfo, but the metadata for both looks exactly the same. I also tried some basic metadata retrieval with ffmpeg and ffprobe; again, both look the same. No luck:

    ffprobe -i 2ch.mp4 -show_streams -select_streams a:0

So the question is: does ffmpeg or ffprobe have a quick way to differentiate the two? Is there an audio filter that can detect whether a specific audio channel is silent? Or any other differentiating metadata? I would prefer to differentiate the two through metadata rather than content analysis.

This is a sample of the 2-channel mp4 file, and this one is a sample of the 6-channel mp4.
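For what it's worth, ffmpeg does ship filters that can answer the silence question directly: astats prints per-channel level statistics, and pan plus silencedetect can probe a single channel. A sketch, assuming the first audio stream and that the center channel carries the FC label (`input.mp4` stands in for either file):

```shell
# Per-channel statistics (RMS/peak levels per channel); a channel that is
# essentially silent shows an RMS level near -inf dB in the report.
ffmpeg -hide_banner -i input.mp4 -map 0:a:0 -af astats -f null -

# Or isolate one channel (here the front-center channel, FC) and run
# silencedetect on it; silence_start/silence_end lines appear on stderr.
ffmpeg -hide_banner -i input.mp4 \
    -af "pan=mono|c0=FC,silencedetect=noise=-60dB:d=2" -f null -
```

Strictly speaking this is content analysis rather than metadata, but it runs without decoding to disk and gives a direct per-channel answer.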