Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (77)

  • Managing object creation and editing rights

    8 February 2011, by

    By default, many features are restricted to administrators, but they can be configured independently to change the minimum status required to use them, notably: writing content on the site, configurable in the form-template management; adding notes to articles; adding captions and annotations to images;

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • The MediaSPIP configuration area

    29 November 2010, by

    The MediaSPIP configuration area is reserved for administrators. An "administer" menu link is generally displayed at the top of the page [1].
    It lets you configure your site in detail.
    This configuration area is divided into three parts: the general site configuration, which notably lets you modify: the main information about the site (...)

On other sites (10427)

  • avcodec/libvpxenc : add a way to explicitly set temporal layer id

    10 February 2020, by Wonkap Jang
    avcodec/libvpxenc : add a way to explicitly set temporal layer id
    

    In order for rate control to correctly allocate bitrate to each temporal
    layer, the correct temporal layer id has to be set for each frame. This
    commit provides the ability to set the correct temporal layer id for each
    frame.

    Signed-off-by: James Zern <jzern@google.com>

    • [DH] doc/encoders.texi
    • [DH] libavcodec/libvpxenc.c
    • [DH] libavcodec/version.h
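    The commit above is about letting the caller set the temporal layer id explicitly per frame. As a rough illustration of the idea (not FFmpeg or libvpx API code), a common 3-layer VP9 temporal structure assigns layer ids in a repeating 0-2-1-2 pattern; a caller could compute the id per frame as sketched below, where the function name and pattern constant are hypothetical:

```python
# Sketch: computing a per-frame temporal layer id for a typical
# 3-layer VP9 temporal structure (repeating pattern 0, 2, 1, 2).
# This only illustrates the per-frame assignment the commit enables;
# the names here are hypothetical, not part of the FFmpeg API.

TL_PATTERN_3_LAYERS = [0, 2, 1, 2]  # period-4 layering

def temporal_layer_id(frame_index, pattern=TL_PATTERN_3_LAYERS):
    """Return the temporal layer id for a given frame index."""
    return pattern[frame_index % len(pattern)]

# Layer 0 carries the base frame rate; layers 1 and 2 each double it.
ids = [temporal_layer_id(i) for i in range(8)]
print(ids)  # [0, 2, 1, 2, 0, 2, 1, 2]
```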
  • Efficient real-time video stream processing and forwarding with RTMP servers

    19 May 2023, by dumbQuestions

    I have a scenario where I need to retrieve a video stream from an RTMP server, apply image processing (specifically, adding blur to frames), and then forward the processed stream to another RTMP server (in this case, Twitch).

    Currently, I'm using ffmpeg in conjunction with cv2 to retrieve and process the stream. However, this approach introduces significant lag when applying the blur. I'm seeking an alternative method that can achieve the desired result more efficiently. I did attempt to solely rely on ffmpeg for the entire process, but I couldn't find a way to selectively process frames based on a given condition and subsequently transmit only those processed frames.

    Is there a more efficient approach or alternative solution that can address this issue and enable real-time video stream processing with minimal lag?

    Thanks in advance!

    import subprocess

    import cv2

    def forward_stream(server_url, stream_key, twitch_stream_key):
        get_ffmpeg_command = [...]

        send_ffmpeg_command = [...]

        # Start the reading FFmpeg process
        read_process = subprocess.Popen(get_ffmpeg_command, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

        # Start the sending FFmpeg process
        send_process = subprocess.Popen(send_ffmpeg_command, stdin=subprocess.PIPE, stderr=subprocess.DEVNULL)

        # Open video capture
        cap = cv2.VideoCapture(f'{server_url}')

        while True:
            # Read a frame
            ret, frame = cap.read()
            if not ret:
                break

            # machine_learning_algorithm() is the poster's own frame classifier
            should_blur = machine_learning_algorithm(frame)

            # Apply blur if necessary
            if should_blur:
                frame = cv2.blur(frame, (25, 25))

            # Write the frame to the sending FFmpeg process
            send_process.stdin.write(frame.tobytes())

        # Release resources
        cap.release()
        read_process.stdout.close()
        read_process.wait()

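    If the blur does not need to be conditional per frame, one way to avoid the cv2 round-trip entirely is to let ffmpeg apply its own boxblur filter in a single relay process. A minimal sketch of building such a command from Python, where the URLs, codec choice, and blur radius are placeholders/assumptions, not values from the original post:

```python
# Sketch: one ffmpeg process that pulls the RTMP stream, blurs it with
# the boxblur filter, and pushes the result to Twitch -- no per-frame
# Python processing. Note: this blurs every frame unconditionally, so
# it cannot reproduce the ML-driven decision from the original code.
import subprocess

def build_blur_relay_command(input_url, output_url, radius=10):
    """Build an ffmpeg command that relays a stream through boxblur."""
    return [
        "ffmpeg",
        "-i", input_url,
        "-vf", f"boxblur={radius}",  # blur inside ffmpeg's filter graph
        "-c:v", "libx264",           # re-encode the filtered video
        "-c:a", "copy",              # pass audio through untouched
        "-f", "flv",                 # RTMP outputs use the FLV muxer
        output_url,
    ]

cmd = build_blur_relay_command("rtmp://source/live/key",
                               "rtmp://live.twitch.tv/app/stream_key")
# subprocess.run(cmd)  # uncomment to actually start the relay
```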
  • (FFmpeg) VP9 Vaapi encoding to a .mp4 or .webm container from given official ffmpeg example

    13 May 2021, by User800222

    I'm trying to implement a VP9 hardware-accelerated encoding process. I followed the official FFmpeg example on GitHub (vaapi_encode.c).

    But the given example only encodes a .yuv file to a .h264 file. I would like to save the frames to either an .mp4 or a .webm container, and to have the ability to control the quality, etc.

    I'm not reading frames from a file; I'm collecting frames from a live feed. Once I have a full 5 seconds of frames from the live feed, I encode them with vp9_vaapi into a 5-second .mp4 file.

    I'm able to save all 5 seconds of frames from my live feed to an .mp4 or .webm file, but the files can't be played correctly (more precisely: the player keeps loading when I open them).

    The result from the official site's example: (screenshot)

    The CPU-encoded VP9 .mp4 file result: (screenshot)

    Edit: result (screenshot)
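    For comparison with the C-API route in vaapi_encode.c, the same kind of 5-second vp9_vaapi segment can be produced through the ffmpeg CLI, which handles the container muxing (including the trailer that .mp4/.webm players need). A sketch of building such a command from Python; the device node, file names, and quality value are assumptions:

```python
# Sketch: building an ffmpeg CLI command that encodes input frames with
# vp9_vaapi into a .webm segment. The render-node path and filenames
# are assumptions; quality is controlled via -global_quality here.
import subprocess

def build_vp9_vaapi_command(input_file, output_file,
                            device="/dev/dri/renderD128", quality=30):
    """Build an ffmpeg command for VAAPI-accelerated VP9 encoding."""
    return [
        "ffmpeg",
        "-vaapi_device", device,     # open the VAAPI render node
        "-i", input_file,
        # convert to NV12 and upload frames to GPU surfaces
        "-vf", "format=nv12,hwupload",
        "-c:v", "vp9_vaapi",
        "-global_quality", str(quality),
        output_file,                 # .webm or .mp4 container
    ]

cmd = build_vp9_vaapi_command("input.mp4", "segment.webm")
# subprocess.run(cmd, check=True)  # requires a VAAPI-capable GPU
```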