Other articles (32)

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work on Internet Explorer
    On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the problem may come from the configuration of Apache's mod_deflate module.
    If the configuration of that module contains a line resembling the one below, try removing or commenting it out to see whether the player then works correctly (the exact directive is truncated in this excerpt): (...)
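
    Since the directive itself was lost in this excerpt, here is a purely hypothetical illustration (an assumption, not the article's original line) of the general shape of a mod_deflate rule one might comment out for this test:

    # Hypothetical example only: a compression rule that could interfere with
    # Flowplayer's progressive download of media files.
    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/javascript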

  • MediaSPIP Player: the controls

    26 May 2010

    Mouse controls for the player
    Besides clicking the visible buttons of the player interface, other actions can be performed with the mouse: Click: clicking on the video, or on the audio logo, toggles playback between play and pause depending on its current state; Scroll wheel: when the mouse hovers over the area used by the media, the wheel no longer scrolls the page as usual, but instead decreases or (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out.

On other sites (4234)

  • add watermark to video with FFMPEG and convert to HLS for all resolution

    25 June 2021, by sina

    I'm using this package: aminyazdanpanah/PHP-FFmpeg-video-streaming. I am trying to add a watermark to a video, and I call the "hls" method after adding the watermark. The problem is that the watermark is applied to only one resolution; the other representations do not get it.

    For example, I use this method:

$video->filters()
    ->watermark('sample.png', [
        'position' => 'relative',
        'bottom'   => 50,
        'right'    => 50,
    ]);

$video->hls()
    ->setFormat($format)
    ->autoGenerateRepresentations([144, 360, 480])
    ->save('sample.m3u8');

    In this case only the default representation (depending on the source video's resolution) is saved, and the other qualities are not generated.

    Is there a way to apply the watermark to all HLS qualities?
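
    Not part of the original post, but a sketch of one possible workaround: burn the watermark into an intermediate file first with plain PHP-FFmpeg, then hand that file to the streaming package so every representation is generated from the already-watermarked source. The class and method names below follow the two libraries as I understand them; treat the exact calls, the paths and the reuse of $format as assumptions to check against the versions you run.

// 1) Burn the watermark into a temporary file with plain PHP-FFmpeg.
$ffmpeg = FFMpeg\FFMpeg::create();
$video  = $ffmpeg->open('input.mp4');                  // hypothetical input path
$video->filters()->watermark('sample.png', [
    'position' => 'relative',
    'bottom'   => 50,
    'right'    => 50,
]);
$video->save(new FFMpeg\Format\Video\X264(), 'watermarked.mp4');

// 2) Generate the HLS representations from the watermarked file,
//    so every quality carries the overlay.
$streaming = Streaming\FFMpeg::create();
$streaming->open('watermarked.mp4')
    ->hls()
    ->setFormat($format)                               // same format object as in the question
    ->autoGenerateRepresentations([144, 360, 480])
    ->save('sample.m3u8');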

    


  • Is there any way to switch the resolution on the fly when streaming with RTMP ?

    25 May 2021, by wwd

    I built nginx with nginx-http-flv-module as the RTMP server, and I used ffmpeg-python to push the stream. I have searched a lot about "how to switch the resolution on the fly", but it seems that nobody does anything like that.

    


    So I ran a test in which I periodically restart a new ffmpeg upload process while the client (opencv-python) keeps receiving the data.
    I found that the resolution did switch successfully, but the frames near the switch were often corrupted, and OpenCV sometimes exits without any logs.

    


    Is there any way to achieve this? (A possible approach is sketched after the client code below.)

    


    Here is the test code of my streamer:

import cv2
import ffmpeg


if __name__ == "__main__":
    # Two copies of the same clip at different resolutions.
    cap1 = cv2.VideoCapture("videos/1944x960/video.mp4")
    cap2 = cv2.VideoCapture("videos/972x480/video.mp4")

    pushing_process = None
    count = 0
    hr = True  # True -> push the high-resolution frames, False -> the low-resolution ones

    while True:
        ret1, frame1 = cap1.read()
        ret2, frame2 = cap2.read()
        if not ret1 or not ret2:
            break

        if hr:
            # (Re)start an ffmpeg process that expects raw 1944x960 BGR frames on stdin
            # and pushes them to the RTMP endpoint as H.264/FLV.
            if pushing_process is None:
                pushing_process = ffmpeg.input("pipe:", format="rawvideo", pix_fmt="bgr24",
                                               s=f"{1944}x{960}") \
                    .output("rtmp://127.0.0.1:1935/myapp/s", pix_fmt="yuv420p", f="flv", vcodec="h264",
                            loglevel="error") \
                    .global_args("-re").run_async(pipe_stdin=True)

            frame = frame1
        else:
            # Same pipeline, but declared for 972x480 input frames.
            if pushing_process is None:
                pushing_process = ffmpeg.input("pipe:", format="rawvideo", pix_fmt="bgr24",
                                               s=f"{972}x{480}") \
                    .output("rtmp://127.0.0.1:1935/myapp/s", pix_fmt="yuv420p", f="flv", vcodec="h264",
                            loglevel="error") \
                    .global_args("-re").run_async(pipe_stdin=True)
            frame = frame2
        pushing_process.stdin.write(frame.tobytes())
        count += 1
        if count == 60:
            # Every 60 frames: stop the current ffmpeg process and switch resolution,
            # so the next iteration starts a fresh process with the other frame size.
            hr = not hr
            pushing_process.stdin.close()
            pushing_process.wait()
            pushing_process = None
            count = 0

    if pushing_process is not None:
        pushing_process.stdin.close()
        pushing_process.wait()



    


    And the client:

import cv2

if __name__ == "__main__":
    # Pull the HTTP-FLV stream served by nginx-http-flv-module.
    cap = cv2.VideoCapture("http://127.0.0.1/live?app=myapp&stream=s")
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        # Upscale everything to the high resolution for display.
        cv2.imshow("frame", cv2.resize(frame, (1944, 960), interpolation=cv2.INTER_CUBIC))
        # Note: waitKey(0) blocks until a key is pressed for every frame.
        cv2.waitKey(0)
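
    As referenced above, this is not part of the original post, just a sketch of one way to avoid the broken frames: keep a single long-lived ffmpeg process whose declared input size never changes, and resize each frame to that fixed size before piping it. The published RTMP stream then never changes format, so players do not have to re-negotiate mid-stream. Paths, sizes and the endpoint are reused from the question and are assumptions otherwise.

import cv2
import ffmpeg

# Fixed output size for the whole session (the high-resolution size from the question).
OUT_W, OUT_H = 1944, 960

if __name__ == "__main__":
    cap1 = cv2.VideoCapture("videos/1944x960/video.mp4")
    cap2 = cv2.VideoCapture("videos/972x480/video.mp4")

    # One persistent pushing process, declared once for the fixed frame size.
    pushing_process = ffmpeg.input("pipe:", format="rawvideo", pix_fmt="bgr24",
                                   s=f"{OUT_W}x{OUT_H}") \
        .output("rtmp://127.0.0.1:1935/myapp/s", pix_fmt="yuv420p", f="flv", vcodec="h264",
                loglevel="error") \
        .global_args("-re").run_async(pipe_stdin=True)

    count, hr = 0, True
    while True:
        ret1, frame1 = cap1.read()
        ret2, frame2 = cap2.read()
        if not ret1 or not ret2:
            break

        frame = frame1 if hr else frame2
        # Resize the currently selected source to the fixed output size instead of
        # restarting ffmpeg with a new input resolution.
        if frame.shape[1] != OUT_W or frame.shape[0] != OUT_H:
            frame = cv2.resize(frame, (OUT_W, OUT_H), interpolation=cv2.INTER_CUBIC)

        pushing_process.stdin.write(frame.tobytes())
        count += 1
        if count == 60:
            hr = not hr  # switch source, but keep the same ffmpeg process
            count = 0

    pushing_process.stdin.close()
    pushing_process.wait()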


    


  • avcodec_decode_video2 fails to decode after frame resolution change

    10 April 2021, by Krzysztof Kansy

    I'm using ffmpeg in an Android project via JNI to decode a real-time H.264 video stream. On the Java side I only send the byte arrays to the native module. The native code runs a loop that checks the data buffers for new data to decode. Each data chunk is processed with:

// Parse the incoming buffer into complete packets and decode each one.
int bytesLeft = data->GetSize();
int parserLength = 0;
int decodeDataLength = 0;
int gotPicture = 0;
const uint8_t* buffer = data->GetData();
while (bytesLeft > 0) {
    AVPacket packet;
    av_init_packet(&packet);
    // Let the H.264 parser split the raw byte stream into one packet per call.
    parserLength = av_parser_parse2(_codecPaser, _codecCtx, &packet.data, &packet.size,
                                    buffer, bytesLeft, AV_NOPTS_VALUE, AV_NOPTS_VALUE, AV_NOPTS_VALUE);
    bytesLeft -= parserLength;
    buffer += parserLength;

    if (packet.size > 0) {
        // Decode the parsed packet; gotPicture is set when a full frame is available.
        decodeDataLength = avcodec_decode_video2(_codecCtx, _frame, &gotPicture, &packet);
    }
    else {
        break;
    }
    av_free_packet(&packet);
}

if (gotPicture) {
    // pass the frame to rendering
}


    



    The system works pretty well until the incoming video's resolution changes; I need to handle transitions between 4:3 and 16:9 aspect ratios. The AVCodecContext is configured as follows:

// Favour decoding speed and use two frame-based decoding threads.
_codecCtx->flags2 |= CODEC_FLAG2_FAST;
_codecCtx->thread_count = 2;
_codecCtx->thread_type = FF_THREAD_FRAME;

// Request low-delay decoding (condition kept as in the original post).
if (_codec->capabilities & CODEC_FLAG_LOW_DELAY) {
    _codecCtx->flags |= CODEC_FLAG_LOW_DELAY;
}


    



    I wasn't able to continue decoding new frames after the video resolution changed: the got_picture_ptr flag that avcodec_decode_video2 sets when a whole frame is available was never true after that.

    This ticket made me wonder whether the issue is connected with multithreading. The only useful thing I've noticed is that when I change thread_type to FF_THREAD_SLICE, the decoder is not always blocked after a resolution change; about half of my attempts were successful. Switching to single-threaded processing is not an option, as I need more computing power; setting the context to one thread does not solve the problem anyway, and it leaves the decoder unable to keep up with the incoming data.

    Everything works well after an app restart.

    



    I can only think of one workaround (which doesn't really solve the problem): unloading and reloading the whole library after the stream resolution changes (e.g. as mentioned here). I don't think that is a good idea, though; it will probably introduce other bugs and take a lot of time (from the user's viewpoint).

    Is it possible to fix this issue? (A decoder-reset sketch is appended after the edit below.)

    EDIT:
    I've dumped the stream data that is passed to the decoding pipeline. I changed the resolution a few times while the stream was being captured. Playing the dump back with ffplay showed that, at the moment the resolution changed and the preview in my application froze, ffplay managed to continue, although its preview was glitchy for a second or so. You can see the full ffplay log here. In this case the video preview stopped when I changed the resolution to 960x720 for the second time ("Reinit context to 960x720, pix_fmt: yuv420p" in the log).
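
    As mentioned above, the following is not part of the original post; it is only a sketch of a pattern sometimes used to get a frame-threaded decoder un-stuck after a mid-stream resolution change: detect the new size from the parser and close/reopen the codec context. It reuses the question's variable names and the old decode API; whether the H.264 parser fills width/height in your build, and whether reopening the same context is enough (some versions prefer allocating a fresh one with avcodec_alloc_context3), are assumptions to verify.

#include <libavcodec/avcodec.h>

/* Members from the question's native module. */
extern AVCodecParserContext *_codecPaser;
extern AVCodecContext       *_codecCtx;
extern AVCodec              *_codec;

/* Hypothetical helper, meant to be called right after av_parser_parse2
 * in the decode loop shown earlier. */
static void reset_decoder_if_resolution_changed(void) {
    if (_codecPaser->width > 0 && _codecPaser->height > 0 &&
        (_codecPaser->width  != _codecCtx->width ||
         _codecPaser->height != _codecCtx->height)) {
        avcodec_flush_buffers(_codecCtx);   /* drop frames still queued in the worker threads */
        avcodec_close(_codecCtx);           /* old API: close ... */
        if (avcodec_open2(_codecCtx, _codec, NULL) < 0) {   /* ... and reopen the decoder */
            /* report the error back to the Java side */
        }
    }
}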