Media (91)

Other articles (75)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all of the software dependencies on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-select fields; see the two images that follow for a comparison.
    To use it, enable the Chosen plugin (Site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

On other sites (13534)

  • Detect if an interlaced video frame is the Top or Bottom field?

    21 December 2024, by Danny

    I'm decoding video PES packets (packetized elementary stream) containing H.264/AVC and H.265/HEVC using libavcodec like this:

while (remainingESBytes > 0)
{
    // NB: bytesUsed would normally be used to advance pIn through the
    // buffer; here the whole buffer is refreshed at the end of each pass
    int bytesUsed = av_parser_parse2(
            mpParser, mpDecContext,
            &mpEncPacket->data, &mpEncPacket->size,
            pIn, remainingESBytes,
            AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);

    // send encoded packet for decoding
    int ret = avcodec_send_packet(mpDecContext, mpEncPacket);
    if (ret < 0)
    {
        // failed
        continue;
    }

    while (ret >= 0)
    {
        ret = avcodec_receive_frame(mpDecContext, mpDecFrame);
        if (ret < 0)
            break; // AVERROR(EAGAIN): the decoder needs more input
        /// Do stuff with frame ///
    }

    remainingESBytes = getMoreBytes();
}

    Sometimes the input video is interlaced, in which case avcodec_receive_frame seems to return individual fields rather than a merged frame containing both the top and bottom fields.

    I couldn't find any way for avcodec_receive_frame to emit a full, non-interlaced frame.

    I can merge a top and bottom field together myself, but I haven't found any way to identify whether a given AVFrame is the top or the bottom field.

    How can I do that?

    EDIT I

    Looking at the log output from the decoder, it appears the decoder knows whether the field is top or bottom (carried by SEI?), but I still can't figure out how to access that information via the libavcodec API...

    [hevc @ 0x1afcfc0] ENTER DECODE NAL TYPE 39. sei.ni_custom.type = -1
[hevc @ 0x1afcfc0] Set sei.ni_custom.type to -1.
[hevc @ 0x1afcfc0] ff_hevc_decode_nal_sei - s->ni_custom.type = -1
[hevc @ 0x1afcfc0] Decoding SEI [NAL Type 39]. ni_custom.type=-1
[hevc @ 0x1afcfc0] TOP Field
[hevc @ 0x1afcfc0] EXIT DECODE NAL TYPE 39. sei.ni_custom.type = -1
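
    For reference, the per-frame interlacing metadata that libavcodec exposes publicly lives on the AVFrame itself. Below is a minimal sketch, assuming FFmpeg 6.1+ for the AV_FRAME_FLAG_* names (older releases carry the same information in the deprecated frame->interlaced_frame and frame->top_field_first fields). For field-sequence streams these bits describe field order rather than explicitly tagging a lone field, so whether they answer the question here should be verified against the FFmpeg version in use.

extern "C" {
#include <libavutil/frame.h>
}
#include <cstdio>

// Sketch: report the interlacing metadata attached to a decoded frame.
// Could be called right after avcodec_receive_frame() succeeds, e.g.
// reportFieldInfo(mpDecFrame).
static void reportFieldInfo(const AVFrame *frame)
{
    if (frame->flags & AV_FRAME_FLAG_INTERLACED) {
        // For a frame carrying both fields this is the field order; for a
        // single decoded field its meaning should be checked per version.
        if (frame->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST)
            std::printf("interlaced, top field first\n");
        else
            std::printf("interlaced, bottom field first\n");
    } else {
        std::printf("progressive\n");
    }
}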

  • How to reduce usage of CPU/memory by ffmpeg?

    10 October 2022, by Sanasar Yuzbashyan

    I know there are many questions about this topic, but I still haven't found what I'm looking for. I am using ffmpeg on the website to create a video from an image, an audio track and a waveform.
    In the beginning my code used 100% of the CPU to create a video. After I started looking for a performance solution, I found several suggestions and applied them, prefixing the command as follows:

    nice -19 cpulimit -l 30 -- ffmpeg -y -threads 1 -i ...

    After that everything worked very well and the CPU usage did not exceed 35-45%. But when I tried to run this command 4 times at the same time, everything got really bad.

    This is my code:

    nice -19 cpulimit -l 30 -- ffmpeg -y -i image.jpg -i audio.mp3 -filter_complex "[0:v]scale=1280x720[image]; [image]drawbox=x=0:y=720:w=1280:h=130:color=red@1:t=fill[img];[1:a]showwaves=s=1280x130:colors=green:mode=cline,format=yuva420p[wave];[img][wave]overlay=0:350[outv]" -map 1:a -c:a copy -map "[outv]" -c:v libx264 -preset medium -threads 1 output.mp4

    And now I cannot understand how to organize all this on a production server where 1000 users can run this command simultaneously when they create a video. I thought it would be better to use a separate server for such resource-intensive processes, so that the work of the site is not interrupted. But still, how can I organize this whole difficult process when I have a lot of users on my site who can use this functionality at the same time? Thank you in advance.
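
    One common shape for this (a sketch, not something from the original question): put a job queue in front of a fixed-size pool of encoding workers, so user requests only enqueue a command and at most N ffmpeg processes run at once; with 1000 concurrent users the queue simply grows while CPU usage stays bounded. The EncodeQueue class below is illustrative; a production setup would typically use a persistent queue (Redis, a database table) consumed by a dedicated encoding server.

#include <condition_variable>
#include <cstdlib>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Fixed pool of workers pulling ffmpeg commands off a shared queue:
// however many users submit jobs, at most `workers` encodes run at once.
class EncodeQueue {
public:
    explicit EncodeQueue(unsigned workers) {
        for (unsigned i = 0; i < workers; ++i)
            threads_.emplace_back([this] { run(); });
    }
    void submit(std::string cmd) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push(std::move(cmd));
        }
        ready_.notify_one();
    }
private:
    void run() {
        for (;;) {  // workers live for the process lifetime in this sketch
            std::string cmd;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                ready_.wait(lock, [this] { return !jobs_.empty(); });
                cmd = std::move(jobs_.front());
                jobs_.pop();
            }
            std::system(cmd.c_str());  // one ffmpeg invocation per slot
        }
    }
    std::mutex mutex_;
    std::condition_variable ready_;
    std::queue<std::string> jobs_;
    std::vector<std::thread> threads_;
};

// Usage: EncodeQueue queue(4);
//        queue.submit("nice -n 19 ffmpeg -y -threads 1 -i ... output.mp4");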

  • Low-latency RTSP stream using OpenCV and ffmpeg seems to ignore flags

    15 July 2024, by Dominic

    I'm writing a Qt6 application in C++ which should display and optionally store multiple RTSP streams. Currently, I'm using OpenCV to capture the stream:

    m_capture = new cv::VideoCapture("rtsp://user:password@192.168.1.108:504/stream", cv::CAP_FFMPEG);
if (!m_capture->isOpened()) {
    //Error
}

cv::Mat frame;
m_abort = false;
while(!m_abort) 
{
    if (!m_capture->read(frame)) 
    {
        //Error
        return;
    }

    doStuff(frame);
}

    This works, but has a delay of about 3 seconds for one of the streams.

    Using this ffplay command in the terminal works without noticeable delay, though:

    ffplay -fflags nobuffer -flags low_delay -rtsp_transport tcp "rtsp://user:password@192.168.1.108:504/stream"

    I've tried to pass these flags through, without luck:

    setenv("OPENCV_FFMPEG_CAPTURE_OPTIONS","rtsp_transport;tcp|fflags;nobuffer|flags;low_delay",1);

    Other than that, I have tried (as suggested e.g. here):

    • using QMediaPlayer
    • setting the cv::VideoCapture cv::CAP_PROP_BUFFERSIZE to 3
    • using grab() and retrieve() instead of read() (see the sketch below this list)

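    A workaround that does not depend on backend options is to decouple capture from processing: a dedicated thread keeps calling read() as fast as frames arrive, overwriting a single "latest frame" slot, so the processing loop always sees the newest frame instead of whatever has queued up. A minimal sketch (the names are illustrative, not from the code above):

#include <atomic>
#include <mutex>
#include <thread>
#include <opencv2/videoio.hpp>

// Capture thread: drain the stream continuously so the backend's
// internal queue never backs up; keep only the newest frame.
cv::Mat latestFrame;
std::mutex frameMutex;
std::atomic<bool> abortCapture{false};

void captureLoop(cv::VideoCapture &cap)
{
    cv::Mat frame;
    while (!abortCapture && cap.read(frame)) {
        std::lock_guard<std::mutex> lock(frameMutex);
        frame.copyTo(latestFrame);  // overwrite, never queue
    }
}

// Consumer side, e.g. on a timer in the Qt GUI thread:
// cv::Mat snapshot;
// {
//     std::lock_guard<std::mutex> lock(frameMutex);
//     latestFrame.copyTo(snapshot);
// }
// doStuff(snapshot);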

    TL;DR: How can I disable buffering in the OpenCV FFMPEG backend for RTSP streams to reduce latency in my Qt application?

    I'm also open to entirely different approaches (e.g. using QMultiMedia), but haven't found a way to reduce latency with those either.
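
    Since entirely different approaches are an option: one is to bypass OpenCV's capture path and read raw frames from an ffmpeg child process, so the exact flags that work for ffplay apply unchanged. A rough POSIX-only sketch, assuming the stream resolution is known up front (1920x1080 below is a placeholder):

#include <cstdio>
#include <opencv2/core.hpp>

int main()
{
    const int W = 1920, H = 1080;  // placeholder: must match the stream
    const char *cmd =
        "ffmpeg -fflags nobuffer -flags low_delay -rtsp_transport tcp "
        "-i \"rtsp://user:password@192.168.1.108:504/stream\" "
        "-f rawvideo -pix_fmt bgr24 -";

    FILE *pipe = popen(cmd, "r");  // POSIX; use _popen on Windows
    if (!pipe)
        return 1;

    const size_t frameBytes = static_cast<size_t>(W) * H * 3;
    cv::Mat frame(H, W, CV_8UC3);
    while (std::fread(frame.data, 1, frameBytes, pipe) == frameBytes) {
        // doStuff(frame);  // frame now holds the newest decoded image
    }
    pclose(pipe);
    return 0;
}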