
Other articles (34)

  • Support audio et vidéo HTML5

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used as a fallback.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and audio both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (6793)

  • Create MPEG-DASH Initialization segment

    5 January 2016, by Mahout

    I am looking to convert between HLS and MPEG-DASH. I do not have access to the original fully concatenated video file, only the individual HLS segments.

    In doing this transformation to MPEG-DASH I need to supply an initialization segment for the DASH manifest .mpd file.

    My questions are:

    1. What is the structure of a DASH video initialization segment?
    2. How can I generate/create one without needing the original full file?
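For question 1: a DASH initialization segment is an MP4 fragment carrying only metadata, typically an `ftyp` box followed by a `moov` box (and no media `mdat`). As a rough illustration of how those top-level boxes are laid out, here is the standard 4-byte-size / 4-byte-type box header rule applied to a toy byte string (a sketch on synthetic bytes, not a real segment):

```python
import struct

def parse_boxes(data):
    """Walk top-level MP4 boxes: each box starts with a 4-byte
    big-endian size followed by a 4-byte ASCII type code."""
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        size, = struct.unpack_from(">I", data, offset)
        boxtype = data[offset + 4:offset + 8].decode("ascii")
        boxes.append((boxtype, size))
        offset += size
    return boxes

# Toy bytes shaped like an init segment: an 'ftyp' box then a 'moov' box.
ftyp = struct.pack(">I", 16) + b"ftyp" + b"iso5" + struct.pack(">I", 0)
moov = struct.pack(">I", 8) + b"moov"  # real moov boxes hold mvhd/trak etc.
print(parse_boxes(ftyp + moov))  # [('ftyp', 16), ('moov', 8)]
```

A real initialization segment's `moov` must describe the codec configuration of the media segments that follow it, which is why tools normally derive it from the source stream rather than hand-build it.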

    Perhaps a solution would involve getting MP4Box to convert the '.ts' HLS segments to DASH '.m4s' segments, which are self-initializing, but I am unsure how to go about this?
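That MP4Box route can be sketched as follows, assuming GPAC's MP4Box and ffmpeg are installed; the segment filenames are placeholders for the actual HLS segments, and this is an untested outline rather than a verified recipe:

```shell
# Rewrap one MPEG-TS HLS segment as MP4 without re-encoding (stream copy).
ffmpeg -i segment_0.ts -c copy segment_0.mp4

# Have MP4Box cut DASH segments and emit an initialization segment
# together with an .mpd manifest from the rewrapped file.
MP4Box -dash 4000 -rap -profile live segment_0.mp4
```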

    Any ideas are much appreciated.

    Many thanks.

    UPDATE:
    Snippet to stream using the original HLS segments. The video plays all the way through but is just black.

     <Representation width="426" height="238" frameRate="25" bandwidth="400000">
       <SegmentList timescale="25000" duration="112500">
         <SegmentURL media="video_0_400000/hls/segment_0.ts"/>
         <SegmentURL media="video_0_400000/hls/segment_1.ts"/>
         <SegmentURL media="video_0_400000/hls/segment_2.ts"/>
       </SegmentList>
     </Representation>
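For comparison, when the segments are not self-initializing, an MPD SegmentList normally names an explicit initialization segment via an Initialization element. A sketch with placeholder paths (the init.mp4 and .m4s names are assumptions, not files from the original setup):

```xml
<Representation width="426" height="238" frameRate="25" bandwidth="400000">
  <SegmentList timescale="25000" duration="112500">
    <Initialization sourceURL="video_0_400000/dash/init.mp4"/>
    <SegmentURL media="video_0_400000/dash/segment_0.m4s"/>
    <SegmentURL media="video_0_400000/dash/segment_1.m4s"/>
  </SegmentList>
</Representation>
```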
  • bash: receive single frames from an ffmpeg pipe

    30 August 2014, by manu

    I’m trying to achieve single-frame handling in a pipe where the J2C encoder "kdu_compress" (Kakadu) only accepts single files, in order to save hard drive space. I didn’t manage to pipe frames directly, so I’m trying to handle them via a bash script, by creating each picture, processing it, and overwriting it with the next.

    Here is my approach. Thanks for your advice; I really want to climb this mountain, though I’m a bit fresh here.


    Is it possible to pipe ffmpeg output to a bash script, save each individual frame,
    and run further commands on the file before the next frame is handled?

    The best result so far is that ALL frames are appended to the intermediate file, without the end of a frame being recognized.

    I used this ffmpeg command to pipe, for example with .ppm:

    ffmpeg -y  -i "/path/to/source.mov" -an -c:v ppm -updatefirst 1 -f image2 - \
    | /path/to/receiver.sh

    and this script as a receiver.sh

    #!/bin/bash  

    while read a;
    do
       cat /dev/null > "/path/to/tempfile.ppm"; #to empty the file first
       cat $a >> "/path/to/tempfile.ppm";        #to fill one picture

       kdu_compress -i /path/to/tempfile.ppm -otherparams   #to process this intermediate

    done
    exit;

    Thank you very much.
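The `while read` loop above treats the binary stream as text lines, so it can never see where one frame ends and the next begins. One way to find the boundaries is to parse each PPM header and then read exactly width * height * 3 bytes of pixel data. A sketch in Python (the kdu_compress step is left out, only the binary P6 variant is handled, and the two tiny in-memory frames below stand in for the real ffmpeg pipe):

```python
import io

def read_ppm_frame(stream):
    """Read one binary PPM (P6) image from a byte stream, or None at EOF.
    Header: 'P6', then width, height, maxval as ASCII tokens,
    then width*height*3 bytes of raw pixel data."""
    magic = stream.readline().strip()
    if not magic:
        return None
    assert magic == b"P6", "only binary PPM (P6) is handled here"
    tokens = []
    while len(tokens) < 3:
        line = stream.readline()
        if line.startswith(b"#"):  # skip PPM comment lines
            continue
        tokens += line.split()
    width, height, _maxval = map(int, tokens[:3])
    pixels = stream.read(width * height * 3)
    return width, height, pixels

# Demo: two tiny 2x1 black frames concatenated, as ffmpeg would pipe them.
frame = b"P6\n2 1\n255\n" + bytes(6)
stream = io.BytesIO(frame + frame)
print(read_ppm_frame(stream)[:2])  # (2, 1)
print(read_ppm_frame(stream)[:2])  # (2, 1)
print(read_ppm_frame(stream))      # None
```

In the real pipeline, each returned frame could be written to the temp file and handed to kdu_compress before the next one is read from stdin.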

  • RGB to YUV conversion with libav (ffmpeg) triplicates image

    17 April 2021, by José Tomás Tocino

    I'm building a small program to capture the screen (using the X11 MIT-SHM extension) to video. It works well if I create individual PNG files of the captured frames, but now I'm trying to integrate libav (ffmpeg) to create the video, and I'm getting... funny results.

    The furthest I've been able to reach is this. The expected result (a PNG created directly from the RGB data of the XImage) is:

    [Expected result]

    However, the result I'm getting is:

    [Obtained result]

    As you can see, the colors are funky and the image appears cropped three times. I have a loop where I capture the screen; first I generate the individual PNG files (currently commented out in the code below), and then I try to use libswscale to convert from RGB24 to YUV420:

    while (gRunning) {
        printf("Processing frame framecnt=%i \n", framecnt);

        if (!XShmGetImage(display, RootWindow(display, DefaultScreen(display)), img, 0, 0, AllPlanes)) {
            printf("\n Ooops.. Something is wrong.");
            break;
        }

        // PNG generation
        // snprintf(imageName, sizeof(imageName), "salida_%i.png", framecnt);
        // writePngForImage(img, width, height, imageName);

        unsigned long red_mask = img->red_mask;
        unsigned long green_mask = img->green_mask;
        unsigned long blue_mask = img->blue_mask;

        // Write image data
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                unsigned long pixel = XGetPixel(img, x, y);

                unsigned char blue = pixel & blue_mask;
                unsigned char green = (pixel & green_mask) >> 8;
                unsigned char red = (pixel & red_mask) >> 16;

                pixel_rgb_data[y * width + x * 3] = red;
                pixel_rgb_data[y * width + x * 3 + 1] = green;
                pixel_rgb_data[y * width + x * 3 + 2] = blue;
            }
        }

        uint8_t* inData[1] = { pixel_rgb_data };
        int inLinesize[1] = { in_w };

        printf("Scaling frame... \n");
        int sliceHeight = sws_scale(sws_context, inData, inLinesize, 0, height, pFrame->data, pFrame->linesize);

        printf("Obtained slice height: %i \n", sliceHeight);
        pFrame->pts = framecnt * (pVideoStream->time_base.den) / ((pVideoStream->time_base.num) * 25);

        printf("Frame pts: %li \n", pFrame->pts);
        int got_picture = 0;

        printf("Encoding frame... \n");
        int ret = avcodec_encode_video2(pCodecCtx, &pkt, pFrame, &got_picture);

//      int ret = avcodec_send_frame(pCodecCtx, pFrame);

        if (ret != 0) {
            printf("Failed to encode! Error: %i\n", ret);
            return -1;
        }

        printf("Succeed to encode frame: %5d - size: %5d\n", framecnt, pkt.size);

        framecnt++;

        pkt.stream_index = pVideoStream->index;
        ret = av_write_frame(pFormatCtx, &pkt);

        if (ret != 0) {
            printf("Error writing frame! Error: %i \n", ret);
            return -1;
        }

        av_packet_unref(&pkt);
    }


    I've placed the entire code in this gist. This question looks pretty similar to mine, but not quite, and its solution did not work for me, although I think this has something to do with the way the line stride is calculated.
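One hypothesis rather than a confirmed diagnosis: the packed-RGB offset `y * width + x * 3` in the loop above starts each new row only `width` bytes further on instead of `3 * width`, so rows land on top of each other and most of the buffer is never written, which would fit a tripled, color-shifted picture. A packed RGB24 buffer needs `(y * width + x) * 3`. A small numeric illustration of the two index formulas:

```python
width, height = 4, 2

# The question's formula: row 1 begins at offset 4, inside row 0's
# span of 0..9, so the rows overlap.
suspect = [y * width + x * 3 for y in range(height) for x in range(width)]

# Packed RGB24 needs 3 bytes per pixel: rows occupy disjoint ranges.
packed = [(y * width + x) * 3 for y in range(height) for x in range(width)]

print(suspect)  # [0, 3, 6, 9, 4, 7, 10, 13] -- row 1 starts inside row 0
print(packed)   # [0, 3, 6, 9, 12, 15, 18, 21] -- rows are disjoint
```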
