
Other articles (78)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable release of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources, as a standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects/individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (13657)

  • FFmpeg's -movflags produces invalid duration in [moov] box

    31 August 2016, by LYF

    I am trying to make an MP4 file suitable for HTML5 streaming by muxing an FLV file into a fragmented MP4 with correct metadata in its first moov box.

    I used the command-line parameters from this article on MDN:

    ffmpeg -i h264aac.flv -c copy -movflags frag_keyframe+empty_moov fragmented.mp4

    Then I fed this fragmented.mp4 slowly into an HTML5 SourceBuffer. The video played for one second and stopped (at its second keyframe, perhaps?).

    I inspected the MP4 file with Bento4's mp4info tool and found an incorrect duration in the moov box:

    Movie:
    duration:   0 ms
    time scale: 1000
    fragments:  yes

    Then I tried:

    ffmpeg -i h264aac.flv -c copy -movflags frag_keyframe+faststart new_fragmented.mp4

    However, new_fragmented.mp4 is only 5 seconds long; it should be 90 seconds.

    > mp4info new_fragmented.mp4

    Movie:
    duration:   5182 ms
    time scale: 1000
    fragments:  yes

    I also found on Stack Overflow a working set of movflags used for live streaming:

    -movflags empty_moov+omit_tfhd_offset+frag_keyframe+default_base_moof

    Now the video can be played, but the player does not know its duration until the file is completely downloaded.

    > mp4info stackoverflowSolution.mp4

    duration:   0 ms
    fragments:  yes

    Codecs String: avc1.640028
    Codecs String: mp4a.40.2

    My goal is to make an MP4 file that plays and reports a correct duration when I stream it. I tried switching parameters and adding and removing + signs, but there are too many combinations, and I was not lucky enough to hit a working one by guessing.

    I can generate a fully playable, glitch-free MP4 file using mp4fragment, but I would like to know how to do the same with FFmpeg.

  • Convert ffmpeg frame into array of YUV pixels in C

    9 June 2016, by loneraver

    I’m using the ffmpeg C libraries and trying to convert an AVFrame into a 2D array of pixels with YUV* components for analysis. I figured out how to read the Y component for each pixel:

    uint8_t y_val = pFrame->data[0][pFrame->linesize[0] * y + x];

    Since all frames have a Y component, this is easy. However, most digital video does not use 4:4:4 chroma subsampling, so getting the U and V components is stumping me.

    I’m using straight C for this project. No C++. Any ideas?

    *Note: yes, I know it’s technically YCbCr, not YUV.

    Edit:

    I’m rather new to C, so it might not be the prettiest code out there.

    When I try:

    VisYUVFrame *VisCreateYUVFrame(const AVFrame *pFrame){
       VisYUVFrame *tmp = (VisYUVFrame*)malloc(sizeof(VisYUVFrame));
       if(tmp == NULL){ return NULL;}
       tmp->height = pFrame->height;
       tmp->width = pFrame->width;

       tmp->data = (PixelYUV***)malloc(sizeof(PixelYUV**) * pFrame->height);
       if(tmp->data == NULL) { return NULL;};

       for(int y = 0; y < pFrame->height; y++){
           tmp->data[y] = (PixelYUV**)malloc(sizeof(PixelYUV*) * pFrame->width);
           if(tmp->data[y] == NULL) { return NULL;}

           for(int x = 0; x < pFrame->width; x++){
               tmp->data[y][x] = (PixelYUV*)malloc(sizeof(PixelYUV*));
               if(tmp->data[y][x] == NULL){ return NULL;};
               tmp->data[y][x]->Y = pFrame->data[0][pFrame->linesize[0] * y + x];
               tmp->data[y][x]->U = pFrame->data[1][pFrame->linesize[1] * y + x];
               tmp->data[y][x]->V = pFrame->data[2][pFrame->linesize[2] * y + x];

           }
       }

       return tmp;
    }

    Luma works, but when I run Valgrind I get two invalid reads:

    Invalid read of size 1
       at VisCreateYUVFrame (/Users/hborcher/ClionProjects/borcherscope/lib/visualization.c:145)
       by render (/Users/hborcher/ClionProjects/borcherscope/lib/decoder/simpleDecoder2.c:253)
       by main (/Users/hborcher/ClionProjects/borcherscope/src/createvisual2.c:93)
    Address 0x10e9f91ef is 0 bytes after a block of size 92,207 alloc’d
       at malloc_zone_memalign (vgpreload_memcheck-amd64-darwin.so)
       by posix_memalign (/usr/lib/system/libsystem_malloc.dylib)
       by av_malloc (/usr/local/Cellar/ffmpeg/3.0.2/lib/libavutil.55.17.103.dylib)

    The second invalid read is identical, except that it points at visualization.c:147 instead of line 145.

  • ffmpeg: is vidstab multithreaded, and/or is there a way to make it perform better on a very high # of cores?

    30 July 2016, by ljwobker

    I’m working on a project where I use the vidstab ffmpeg plugin to stabilize some videos. I’m lucky enough to have access to an extremely fast x86 machine (2 sockets, 16 cores, 32 threads), but I can’t seem to keep it busy, and I’m trying to figure out whether that is a limitation of the toolchain or of my config/commands. The workflow is basically three steps:

    1. crop the video in terms of both time and dimension (ffmpeg "crop"
      filter)
    2. run vidstabdetect to identify the transformation corrections
    3. run vidstabtransform to apply the transformation and output the
      final video

    When I run the transcode script on this machine, the "crop" pass executes extremely fast, and the htop output clearly shows all 32 cores (threads) running at nearly 100%.

    When I run the pass with vidstabdetect, htop clearly shows one core running at/near 100%, with all of the other cores hovering in the "few percent" range, and total CPU utilization for the parent PID hovers near 130%. This leads me to believe there must be only one main processing thread, but also several other smaller threads that are consuming at least some parallel time.

    The vidstabtransform pass looks similar, with one core constantly near 100% and the rest of the cores hovering at a few percent.

    As far as I can tell, there is no way to parallelize the two vidstab passes, as the transform step is completely dependent on the results of the detection pass. There is a single-pass option described in the vidstab docs, but the quality isn’t as good, so I’m trying to avoid it.