Advanced search

Media (29)

Keyword: - Tags -/Music

Other articles (97)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, a preconfiguration is set up automatically by MediaSPIP init so that the new feature is immediately operational. No configuration step is therefore required.

  • Mediabox: opening images in the maximum space available to the user

    8 February 2011, by

    Image viewing is constrained by the width allowed by the site's design (which depends on the theme in use), so images are shown at a reduced size. To take advantage of all the space available on the user's screen, a feature can be added that displays the image in a multimedia box shown above the rest of the content.
    To do this, the "Mediabox" plugin must be installed.
    Configuring the multimedia box
    As soon as (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

On other sites (15995)

  • A 'clean' way to cut an MP4 movie into two sections using FFMPEG?

    30 August 2020, by Peter in Japan

    I am attempting to use FFMPEG to make a script that can easily split a short MP4 movie with sound into two pieces at a certain point. I've searched through what feels like hundreds of posts to try to find "the" answer, but most of my attempts end up with poor results: broken video files that cause my video player to freeze, and so on. The goal is a script that lets me take a short anime movie (something grabbed from Twitter or some other short source) and cut it into two sections, so it can be under the 2:20 Twitter time limit, or so I can cut out some scene I don't want to show my followers.

    The issue is that FFMPEG is good at cutting videos into segments, but bad at knowing where keyframes are, so most videos end up with two seconds of blank video at the front before a keyframe appears, which looks terrible.

    One example I found that works well is below, which cuts any mp4 into a bunch of n-second chunks (six seconds in the example below). Source and documentation for this is https://moe.vg/3b8eNTs

    ffmpeg -i seitokai.mp4 -c:v libx264 -crf 22 -map 0 -segment_time 6 -reset_timestamps 1 -g 30 -sc_threshold 0 -force_key_frames "expr:gte(t,n_forced*1)" -f segment output%03d.mp4

    This code works great, at least allowing me to access the "left" side of the video that I want, in this case the six-second segment I want. Can anyone tell me how to accomplish the above, but starting at the 13-second point in said video, so I could get the "right-hand" cut (not starting from 00:00:00) cleanly?

    Alternately, a single, unified and elegant way to split (re-encode) an MP4 into two segments that forces keyframes at the very beginning of the cut pieces would be wonderful. I can't believe how hard this seems to be.
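
    For what it's worth, a common approach (a sketch only, not taken from the question: the output file names, the 13-second cut point, and the aac audio codec are assumptions) is to re-encode both halves, so ffmpeg places a keyframe on the very first frame of each piece, and to use input seeking with -ss placed before -i for the second half:

```shell
# Left half: first 13 seconds, re-encoded so it starts on a fresh keyframe.
ffmpeg -i seitokai.mp4 -t 13 -c:v libx264 -crf 22 -c:a aac left.mp4

# Right half: -ss before -i seeks the input; because the output is
# re-encoded, the cut lands exactly at 13s rather than on the nearest
# keyframe, avoiding the blank-lead problem.
ffmpeg -ss 13 -i seitokai.mp4 -c:v libx264 -crf 22 -c:a aac right.mp4
```

    Stream copy (-c copy) would be faster but can only cut on existing keyframes, which is exactly the artifact described above; re-encoding trades speed for frame-accurate cuts.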

    Thanks in advance for any help you can give! Greetings from rural Japan!

  • libswscale bad dst image pointers cgo

    28 August 2020, by SolskGaer

    I am trying to use libswscale to scale an image before encoding it to h264 using cgo. Here is a simple demo I wrote (sorry for the bad code style, I just wanted to do a quick verification):

    func scale(img []byte, scaleFactor int) {
        input, _, _ := image.Decode(bytes.NewReader(img))
        if a, ok := input.(*image.YCbCr); ok {
            width, height := a.Rect.Dx(), a.Rect.Dy()
            var format C.enum_AVPixelFormat = C.AV_PIX_FMT_YUV420P
            context := C.sws_getContext(C.int(width), C.int(height), format, C.int(width/scaleFactor), C.int(height/scaleFactor), 0, C.int(0x10), nil, nil, nil)
            in := make([]uint8, 0)
            in = append(in, a.Y...)
            in = append(in, a.Cb...)
            in = append(in, a.Cr...)
            stride := []C.int{C.int(width), C.int(width / 2), C.int(width / 2), 0}
            outstride := []C.int{C.int(width / scaleFactor), C.int(width / scaleFactor / 2), C.int(width / scaleFactor / 2), 0}
            out := make([]uint8, width*height/scaleFactor/scaleFactor*3/2)
            C.sws_scale(context, (**C.uint8_t)(unsafe.Pointer(&in[0])), (*C.int)(&stride[0]), 0,
                C.int(height), (**C.uint8_t)(unsafe.Pointer(&out[0])), (*C.int)(&outstride[0]))
            min := image.Point{0, 0}
            max := image.Point{width / scaleFactor, height / scaleFactor}
            output := image.NewYCbCr(image.Rectangle{Min: min, Max: max}, image.YCbCrSubsampleRatio420)
            paneSize := width * height / scaleFactor / scaleFactor
            output.Y = out[:paneSize]
            output.Cb = out[paneSize : paneSize*5/4]
            output.Cr = out[paneSize*5/4:]
            opt := jpeg.Options{
                Quality: 90,
            }
            f, _ := os.Create("img.jpeg")
            jpeg.Encode(f, output, &opt)
        }
    }

    Every time I run the code snippet, I get an error saying bad dst image pointers. What is the problem with my code? I am new to cgo, so the code probably looks silly to you; I apologize for that.
    If you have a more elegant way to achieve this functionality, I am all ears. Any suggestion would be appreciated.
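
    One note, not from the original post: sws_scale takes srcSlice and dst as arrays of per-plane pointers (uint8_t *const[]) plus matching stride arrays, not a single pointer to packed data, so casting &in[0] to **C.uint8_t makes swscale read pixel bytes as if they were plane pointers, which would produce exactly a "bad dst image pointers" error. A minimal pure-Go sketch (hypothetical helper, no cgo) of the YUV420 plane sizes the code above slices out with paneSize:

```go
package main

import "fmt"

// planeSizes returns the luma plane size and the size of ONE chroma
// plane, in bytes, for a YUV420 frame: each chroma plane is subsampled
// 2x2, i.e. one quarter of the luma plane.
func planeSizes(width, height int) (lumaSize, chromaSize int) {
	lumaSize = width * height
	chromaSize = lumaSize / 4
	return
}

func main() {
	y, c := planeSizes(320, 240)
	// Y plane, then Cb at offset y, then Cr at offset y+c;
	// the total y+2*c is width*height*3/2.
	fmt.Println(y, c, y+2*c)
}
```

    The same arithmetic explains the out buffer size width*height/scaleFactor/scaleFactor*3/2 in the snippet; the crash comes from how the buffers are passed, not from their sizes.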

  • ffmpeg av_interleaved_write_frame(): Broken pipe under Windows

    7 April 2016, by Allen

    I am using ffmpeg to convert an original media file to rawvideo yuv format, outputting the yuv to a pipe; my command-line tool then receives the raw yuv as input and does some processing.

    e.g.:

    D:\huang_xuezhong\build_win32_VDNAGen>ffmpeg -i test.mkv -c:v rawvideo -s 320x240 -f rawvideo - | my_tool -o output

    Every time the command runs, ffmpeg dumps this av_interleaved_write_frame(): Broken pipe error message:

    Output #0, rawvideo, to 'pipe:':
     Metadata:
     encoder         : Lavf56.4.101
     Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 320x240 [SAR 120:91 DAR 160:91], q=2-31, 200 kb/s, 24 fps, 24 tbn, 24 tbc (default)
     Metadata:
         encoder         : Lavc56.1.100 rawvideo
     Stream mapping:
         Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
    Press [q] to stop, [?] for help
    av_interleaved_write_frame(): Broken pipe
    frame=    1 fps=0.0 q=0.0 Lsize=     112kB time=00:00:00.04 bitrate=22118.2kbits/s
    video:112kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing o
    verhead: 0.000000%
    Conversion failed!

    In my source code, stdin is taken as the input file. Each iteration reads up to a frame-sized amount of content from it; if the content read is less than a frame size, reading continues until a full frame is fetched, and the frame content is then used to generate something.

    int do_work (int jpg_width, int jpg_height)
    {
    int ret = 0;

    FILE *yuv_fp = NULL;
    unsigned char * yuv_buf = NULL;
    int frame_size = 0;
    int count = 0;
    int try_cnt = 0;

    frame_size = jpg_width * jpg_height * 3 / 2;
    va_log (vfp_log, "a frame size:%d\n", frame_size);

    yuv_fp = stdin;

    yuv_buf = (unsigned char *) aligned_malloc_int(
           sizeof(char) * (jpg_width + 1) * (jpg_height + 1) * 3, 128);

    if (!yuv_buf) {
       fprintf (stderr, "malloc yuv buf error\n");
       goto end;
    }

    memset (yuv_buf, 0, frame_size);
    while (1) {

       try_cnt++;
       va_log (vfp_log, "try_cnt is %d\n", try_cnt);

       //MAX_TRY_TIMES = 10
       if (try_cnt > MAX_TRY_TIMES) {
           va_log (vfp_log, "try time out\n");
           break;
       }

       count = fread (yuv_buf + last_pos, 1, frame_size - last_pos, yuv_fp);
       if (last_pos + count < frame_size) {
           va_log (vfp_log, "already read yuv: %d, this time:%d\n", last_pos + count, count);
           last_pos += count;
           continue;
       }

       // do my work here

       memset (yuv_buf, 0, frame_size);
       last_pos = 0;
       try_cnt = 0;
    }

    end:
    if (yuv_buf) {
       aligned_free_int (yuv_buf);
    }

    return ret;
    }

    My log:

    2016/04/05 15:20:38 : a frame size:115200
    2016/04/05 15:20:38 : try_cnt is 1
    2016/04/05 15:20:38 : already read yuv : 49365, this time:49365
    2016/04/05 15:20:38 : try_cnt is 2
    2016/04/05 15:20:38 : already read yuv : 49365, this time:0
    2016/04/05 15:20:38 : try_cnt is 3
    2016/04/05 15:20:38 : already read yuv : 49365, this time:0
    2016/04/05 15:20:38 : try_cnt is 4
    2016/04/05 15:20:38 : already read yuv : 49365, this time:0
    2016/04/05 15:20:38 : try_cnt is 5
    2016/04/05 15:20:38 : already read yuv : 49365, this time:0
    2016/04/05 15:20:38 : try_cnt is 6
    2016/04/05 15:20:38 : already read yuv : 49365, this time:0
    2016/04/05 15:20:38 : try_cnt is 7
    2016/04/05 15:20:38 : already read yuv : 49365, this time:0
    2016/04/05 15:20:38 : try_cnt is 8
    2016/04/05 15:20:38 : already read yuv : 49365, this time:0
    2016/04/05 15:20:38 : try_cnt is 9
    2016/04/05 15:20:38 : already read yuv : 49365, this time:0
    2016/04/05 15:20:38 : try_cnt is 10
    2016/04/05 15:20:38 : already read yuv : 49365, this time:0
    2016/04/05 15:20:38 : try_cnt is 11
    2016/04/05 15:20:38 : try time out

    My question:

    When piping is used, does ffmpeg write content to the pipe as soon as it has content, or does it buffer some amount and then flush it to the pipe? Maybe there is some internal logic that I have misunderstood; could anyone explain it or help fix my code?

    PS: this command runs OK under Linux.