
Media (91)

Other articles (74)

  • Contribute to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. Simply subscribe to the translators' mailing list to ask for more information.
    At the moment, MediaSPIP is only available in French and (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (4294)

  • mpegvideo: drop support for real (non-emulated) edges

    20 December 2013, by Anton Khirnov
    

    Several decoders disable those anyway and they are not measurably faster
    on x86. They might be somewhat faster on other platforms due to missing
    emu edge SIMD, but the gain is not large enough (and those decoders
    relevant enough) to justify the added complexity.

    • libavcodec/mpegvideo.c
    • libavcodec/mpegvideo.h
    • libavcodec/mpegvideo_enc.c
    • libavcodec/mpegvideo_motion.c
    • libavcodec/mss2.c
    • libavcodec/rv10.c
    • libavcodec/rv34.c
    • libavcodec/svq3.c
    • libavcodec/vc1dec.c
    • libavcodec/wmv2.c
  • ffmpeg to generate DASH and HLS - best practice

    8 September 2017, by LaborC

    Looking for the correct way to encode a given input video at multiple bitrates and then package it for DASH and HLS. I thought this was a basic task, but for me it was quite a challenge. So the way I do it is as follows:

    First I split my video (mp4) into video and audio (I encode the audio, because I need to make sure that the output codec is AAC, which I think is a requirement for the web).

    ffmpeg -i source_input.mp4 -c:v copy -an video_na.mp4
    ffmpeg -i source_input.mp4 -c:a aac -ac 2 -async 1 -vn audio.mp4

    Then I encode the video with the following commands:

       ffmpeg.exe -i video_na.mp4 -an -c:v libx264 -crf 18 \
    -preset fast -profile:v high -level 4.2 -b:v 2000k -minrate 2000k \
    -maxrate 2000k -bufsize 4000k -g 96 -keyint_min 96 -sc_threshold 0 \
    -filter:v "scale='trunc(oh*a/2)*2:576'" -movflags +faststart \
    -pix_fmt yuv420p -threads 4 -f mp4 video-2000k.mp4

       ffmpeg.exe -i video_na.mp4 -an -c:v libx264 -crf 18 \
    -preset fast -profile:v high -level 4.2 -b:v 1500k -minrate 1500k \
    -maxrate 1500k -bufsize 3000k -g 96 -keyint_min 96 -sc_threshold 0 \
    -filter:v "scale='trunc(oh*a/2)*2:480'" -movflags +faststart \
    -pix_fmt yuv420p -threads 4 -f mp4 video-1500k.mp4

    After that I fragment the videos (I used the parameter --timescale 10000, but then the result was out of sync).
    Side note: the -g parameter is 4 × 24 frames (96); this is important because the fragment duration is 4000 ms (4 seconds), and 96 frames ÷ 24 fps = 4 s, so every fragment boundary falls on a keyframe.

    mp4fragment --fragment-duration 4000 video-2000k.mp4 \
    video-2000k-f.mp4

    mp4fragment --fragment-duration 4000 video-1500k.mp4 \
    video-1500k-f.mp4

    And finally I package everything together again for DASH (I used to use --use-segment-timeline, but then again the result was out of sync).
    I use mp4dash and not MP4Box because I want to be able to encrypt everything later on for DRM.

    mp4dash --media-prefix=out  \
         video-2000k-f.mp4  \
         video-1500k-f.mp4  \
        --out dash

    The result works in Firefox, Chrome, and IE/Edge via a web server, and via AWS CloudFront streaming it also works on older browsers.

    So for me there are still two tasks to accomplish.
    First, I need to generate an HLS package for iPhone and iPad users.
    And second: I need to encrypt everything.
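
    For the encryption side, one possible direction is MPEG-CENC encryption at packaging time with Bento4. The following is only a hedged sketch: it assumes the installed mp4dash exposes an --encryption-key option taking a hex KID:KEY pair (check mp4dash --help for the exact spelling on your version), and the key values are all-zero placeholders.

    # Hedged sketch: package the fragmented renditions with CENC encryption.
    # The --encryption-key option is assumed to exist in this form; KID and key
    # below are placeholders and must be replaced with real values.
    mp4dash --media-prefix=out \
         --encryption-key=00000000000000000000000000000000:00000000000000000000000000000000 \
         video-2000k-f.mp4 \
         video-1500k-f.mp4 \
        --out dash-encrypted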

    So far my HLS command is:

    ffmpeg -y -loglevel info \
           -i video-2000k.mp4 \
           -i video-1500k.mp4 \
           -i audio.mp4 \
           -profile:v baseline -start_number 0 -hls_time 10 \
           -flags -global_header -hls_list_size 0 -f hls hls/master.m3u8

    This basically works, but it generates only one bandwidth, without the possibility of multiple variant streams.
    I am not certain about that statement, but it looks that way.
    Does anyone have an idea of what I am doing wrong?
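
    For comparison, newer ffmpeg builds can produce several variant streams plus a master playlist in a single run, without re-encoding. The following is only a sketch under the assumption that the ffmpeg in use is recent enough to support the hls muxer options -var_stream_map and -master_pl_name (and to auto-insert the h264_mp4toannexb bitstream filter when copying into TS segments); the hls/ output directory (which must already exist) and the stream_%v.m3u8 name pattern are illustrative.

    # Hedged sketch: remux the two renditions and the AAC audio into a
    # multi-variant HLS package with a generated master playlist.
    # "v:0,a:0" pairs the first mapped video with the first mapped audio,
    # "v:1,a:1" the second pair.
    ffmpeg -y -i video-2000k.mp4 -i video-1500k.mp4 -i audio.mp4 \
           -map 0:v -map 2:a -map 1:v -map 2:a -c copy \
           -f hls -hls_time 10 -hls_list_size 0 -hls_playlist_type vod \
           -var_stream_map "v:0,a:0 v:1,a:1" \
           -master_pl_name master.m3u8 \
           hls/stream_%v.m3u8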

  • libav works with an RTP stream on PC, but not on Android (same RTP stream)

    25 September 2016, by Nitay

    I’m using libav to decode video received from a 3rd party. The video is received in an Android app and is then made into an RTP stream and published to another device.

    When I use the PC as the target device, the stream decodes properly and I see video. When I use Android (same code, compiled for Android) the video does not decode at all.
    This happens only with the video from the 3rd party. Other video streams work fine both on PC and on Android.

    To be clear:

    • If the stream is cast from the command line using ffmpeg -> video is displayed both on Android and on PC
    • If the stream is cast from the Android app -> video is displayed only on PC (the same code, compiled for different platforms)

    libav 11.7 was compiled for Android using the following configure invocation:

    NDK=/opt/android-ndk-r12b
    SYSROOT="${NDK}/platforms/android-23/arch-arm/"
    ECFLAGS="-march=armv7-a -mfloat-abi=softfp -I /usr/local/include"
    ELDFLAGS="-Wl,--fix-cortex-a8 -L /usr/local/lib"
    ARCH_SPECIFIC="--disable-asm --arch=arm --cpu=armv7-a --cross-prefix=/opt/android-ndk-r12b/prebuilt/linux-x86_64/bin/../../../toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin/arm-linux-androideabi-"

    ./configure \
    ${ARCH_SPECIFIC} \
    --target-os=linux \
    --sysroot="$SYSROOT" \
    --extra-cflags="$ECFLAGS" \
    --extra-ldflags="$ELDFLAGS" \
    --enable-shared \
    --disable-symver

    (--disable-asm is unfortunately needed to avoid text relocations in the compiled library, which are not allowed on Android)

    Here are the libav logs from the android side : http://pastebin.com/MDE3N7BD

    The lines starting with LIBAV are libav messages; the ones without are my own messages wrapped around the libav functions.

    Logs from the PC side : http://pastebin.com/N0Fd18F9

    The loop that reads the frames:

           // If read frame fails (which happens), keep trying
           LOG_DEBUG("Before read frame");
           while (av_read_frame(formatContext, &packet) >= 0 && !terminate)
           {
               LOG_DEBUG1("Packet read. Size: %d", packet.size);

               this->DecodeFrame(videoStreamIndex, formatContext->streams[videoStreamIndex]->codec, &packet);

               av_free_packet(&packet);
               av_init_packet(&packet);
           }

    And here is the frame decoding code:

    void VideoDecoder::DecodeFrame(int videoStreamIndex, AVCodecContext* streamCodec, AVPacket* packet)
    {
       static bool save_file = false;

       AVPixelFormat destinationFormat = AV_PIX_FMT_RGBA;
       LOG_DEBUG("Decoding frame!");


       if (this->isFirstFrame)
       {
           LOG_DEBUG("Creating codecs");
           this->isFirstFrame = false;
           // For parsing the packets, we first need to create the right codec
           AVCodec* h264Codec = NULL;
           // (I'm not sure why ffmpeg needs this. It has an SDP file which states exactly that, but okay)
           h264Codec = avcodec_find_decoder(AV_CODEC_ID_H264);

           // Now make a copy of the codec for us to change
           codecContext = avcodec_alloc_context3(h264Codec);
           avcodec_get_context_defaults3(codecContext, h264Codec);
           avcodec_copy_context(codecContext, streamCodec);


           // Initialize codecContext to use codec
           if (avcodec_open2(codecContext, h264Codec, NULL) >= 0)
           {
               // There's a nasty edge case here that we need to handle first
               if (streamCodec->width == 0 || streamCodec->height == 0)
               {
                   // That means that the stream initialized before any packets were sent to it; we can't initialize
                   // any buffers without knowing their size. So to tackle this we'll initialize the largest buffer
                   // we can think of

                   codecContext->width = MAX_RESOLUTION_WIDTH;
                   codecContext->height = MAX_RESOLUTION_HEIGHT;
               }

               // Instantiate new buffers
               int size = avpicture_get_size(AV_PIX_FMT_YUV420P, codecContext->width, codecContext->height);
               originalPic = av_frame_alloc();
               originalPicBuffer = (uint8_t*)(av_malloc(size));

               avpicture_fill((AVPicture*)originalPic, originalPicBuffer, AV_PIX_FMT_YUV420P, codecContext->width, codecContext->height);
           }

           // Instantiate an output context, for usage in the conversion of the picture
           outputFormatContext = avformat_alloc_context();
       }

       if ((packet->stream_index == videoStreamIndex) && !terminate)
       {
           // Packet is video. Convert!

           if (outputStream == NULL)
           {
               //create stream in file
               outputStream = avformat_new_stream(outputFormatContext, streamCodec->codec);
               avcodec_copy_context(outputStream->codec, streamCodec);
               outputStream->sample_aspect_ratio = streamCodec->sample_aspect_ratio;
           }

           int pictureReceived = 0;
           packet->stream_index = outputStream->id;
           int result = avcodec_decode_video2(codecContext, originalPic, &pictureReceived, packet);
           //          std::cout << "Bytes decoded " << result << " check " << check << std::endl;

           if (pictureReceived)
           {
               LOG_DEBUG("New frame received");
               // NOTICE: It is generally not good practice to allocate on demand instead of at initialization.
               // In this case the edge cases demand it (what happens if width == 0 on the first packet?)
               if (this->imageConvertContext == NULL)
               {
                   // Allocate pictures and buffers for conversion
                   this->imageConvertContext = sws_getContext(
                       codecContext->width,
                       codecContext->height,
                       codecContext->pix_fmt,
                       codecContext->width,
                       codecContext->height,
                       destinationFormat,
                       SWS_BICUBIC,
                       NULL, NULL, NULL);
               }

               if (this->convertedPic == NULL)
               {
                   int size_rgba = avpicture_get_size(destinationFormat, codecContext->width, codecContext->height);
                   convertedPicBuffer = (uint8_t*)(av_malloc(size_rgba));
                   convertedPic = av_frame_alloc();
                   avpicture_fill((AVPicture*)convertedPic, convertedPicBuffer, destinationFormat, codecContext->width, codecContext->height);
               }

               // Scale the image
               sws_scale(imageConvertContext, originalPic->data, originalPic->linesize, 0, codecContext->height, convertedPic->data, convertedPic->linesize);

               // We have a frame! Callback
               if (frameReadyCallback != NULL)
               {
                   LOG_DEBUG3("Updated frame [width=%d, height=%d, ptr=0x%08x]", codecContext->width, codecContext->height, convertedPic->data[0]);
                   if (save_file)
                   {
                       save_file = false;
                       std::string filename = "/storage/emulated/0/DCIM/saved_file.rgba";
                       save_buffer_to_file((unsigned char*)convertedPic->data[0], codecContext->width * codecContext->height * 4, filename.c_str());
                       LOG_DEBUG("Exported file");
                   }
                   frameReadyCallback((char*)convertedPic->data[0], codecContext->width, codecContext->height);
               }
           }
           else
           {
               LOG_DEBUG("Packet without frame");
           }
       }
    }

    Obviously the stream from the 3rd party is somehow different, and probably comes from a different encoder. But it works with libav (same version) on the PC. What could be the difference on Android causing it not to find the frames?
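
    One way to test the "different encoder" theory is to dump both streams' parameters on the PC, where decoding works, and compare profile, level, pixel format, and whether SPS/PPS arrive in-band. A minimal sketch, assuming the RTP session is described by an SDP file and that ffprobe (or Libav's avprobe) is available; stream.sdp is a placeholder name, and newer FFmpeg builds may additionally require -protocol_whitelist "file,udp,rtp" before -i.

    # Hedged sketch: while the sender is streaming, print container and
    # per-stream parameters, once for the working stream and once for the
    # 3rd-party stream, then diff the two outputs.
    ffprobe -show_format -show_streams -i stream.sdp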