Advanced search

Media (1)

Keyword: - Tags -/belgique

Other articles (59)

  • The farm's regular Cron tasks

    1 December 2010

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the shared hosting (mutualisation) on a regular basis. Combined with a system Cron on the central site of the mutualisation (see the crontab sketch after this list), this is enough to generate regular visits to the various sites and to prevent the tasks of rarely visited sites from being too (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP:

    Distribution name   Version name           Version number
    Debian              Squeeze                6.x.x
    Debian              Wheezy                 7.x.x
    Debian              Jessie                 8.x.x
    Ubuntu              The Precise Pangolin   12.04 LTS
    Ubuntu              The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)
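
As an illustration of the super Cron mechanism described in the first article above, the system Cron on the central site can be as simple as an HTTP request fired every minute; the URL below is a hypothetical placeholder, and any HTTP client (wget, curl, ...) would do:

    # Hypothetical crontab entry on the central server: visit the central
    # SPIP site every minute so that gestion_mutu_super_cron can, in turn,
    # trigger the Cron of every instance of the farm.
    * * * * * wget -q -O /dev/null http://ferme.example.org/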

On other sites (8002)

  • Compression Artifacts using sws_scale() AVFrame YUV420P -> OpenCV Mat BGR24 and back

    8 September 2023, by Morph

    I transcode, using C++ and FFmpeg, an H264 video in an .mp4 container to H265 video in an .mp4 container. That works perfectly, with crisp and clear images, and the encoding conversion is confirmed by checking with FFprobe.

    Then I call one extra function in between the end of the H264 decoding and the start of the H265 encoding. At that point I have an allocated AVFrame* that I pass to that function as an argument.

    The function converts the AVFrame into an OpenCV cv::Mat and back. Technically that is the easy part, yet I encountered a compression-artifact problem in the process, and I don't understand why it happens.

    The function code (including a workaround for the question that follows) is as follows:

    // Headers needed by this function (FFmpeg has C linkage):
    extern "C" {
    #include <libavutil/frame.h>
    #include <libswscale/swscale.h>
    }
    #include <opencv2/opencv.hpp>

    void modifyVideoFrame(AVFrame* frame)
    {
        // STEP 1: WORKAROUND; overwriting AV_PIX_FMT_YUV420P BEFORE both
        // sws_scale() calls below solves the "compression artifacts" problem.
        frame->format = AV_PIX_FMT_RGB24;

        // STEP 2: Convert the FFmpeg AVFrame to an OpenCV cv::Mat (matrix) object.
        cv::Mat image(frame->height, frame->width, CV_8UC3);
        int clz = (int)image.step1();  // row stride in bytes (CV_8UC3 => 3 * width, possibly padded)

        SwsContext* context = sws_getContext(frame->width, frame->height, (AVPixelFormat)frame->format,
                                             frame->width, frame->height, AV_PIX_FMT_BGR24,
                                             SWS_FAST_BILINEAR, NULL, NULL, NULL);
        sws_scale(context, frame->data, frame->linesize, 0, frame->height, &image.data, &clz);
        sws_freeContext(context);

        // STEP 3: Change the pixels.
        if (false)
        {
            // TODO when the "compression artifacts" problem with baseline YUV420P
            // to BGR24 and back from BGR24 to YUV420P is solved or explained and understood.
        }

        // UPDATE: Added VISUAL CHECK.
        cv::imshow("Visual Check of Conversion AVFrame to cv::Mat", image);
        cv::waitKey(20);

        // STEP 4: Convert the OpenCV Mat object back to the FFmpeg AVFrame.
        clz = (int)image.step1();
        context = sws_getContext(frame->width, frame->height, AV_PIX_FMT_BGR24,
                                 frame->width, frame->height, (AVPixelFormat)frame->format,
                                 SWS_FAST_BILINEAR, NULL, NULL, NULL);
        sws_scale(context, &image.data, &clz, 0, frame->height, frame->data, frame->linesize);
        sws_freeContext(context);
    }

    The code as shown, including the workaround, works perfectly, but it is NOT understood.

    Using FFprobe I established that the input pixel format is YUV420P, which is indeed the AV_PIX_FMT_YUV420P found in the frame's format field. If I convert it to BGR24 and back to YUV420P without the workaround in step 1, I get slight compression artifacts that are clearly visible when viewing with VLC. So there is a loss somewhere, which is what I am trying to understand.
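
    To make that loss measurable, here is a minimal, self-contained sketch (my addition, not part of the original question) that runs the same YUV420P -> BGR24 -> YUV420P round trip on a synthetic frame and sums the absolute luma error, once with SWS_FAST_BILINEAR as in the function above and once with flags commonly suggested to reduce sws_scale() round-trip loss (SWS_ACCURATE_RND plus full-chroma interpolation); the 320x240 size and the fill pattern are arbitrary assumptions:

    extern "C" {
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    }
    #include <cstdio>

    // Sum of absolute luma differences after YUV420P -> BGR24 -> YUV420P.
    static long roundTripError(int w, int h, int flags)
    {
        uint8_t *src[4], *bgr[4], *dst[4];
        int srcLs[4], bgrLs[4], dstLs[4];
        av_image_alloc(src, srcLs, w, h, AV_PIX_FMT_YUV420P, 16);
        av_image_alloc(bgr, bgrLs, w, h, AV_PIX_FMT_BGR24,   16);
        av_image_alloc(dst, dstLs, w, h, AV_PIX_FMT_YUV420P, 16);

        // Deterministic test pattern: one full-size luma plane, two
        // half-height chroma planes (that is what the "420" in YUV420P means).
        for (int i = 0; i < srcLs[0] * h;     i++) src[0][i] = (uint8_t)(i * 7);
        for (int i = 0; i < srcLs[1] * h / 2; i++) src[1][i] = (uint8_t)(i * 3);
        for (int i = 0; i < srcLs[2] * h / 2; i++) src[2][i] = (uint8_t)(i * 5);

        SwsContext *toBgr = sws_getContext(w, h, AV_PIX_FMT_YUV420P,
                                           w, h, AV_PIX_FMT_BGR24, flags, NULL, NULL, NULL);
        SwsContext *toYuv = sws_getContext(w, h, AV_PIX_FMT_BGR24,
                                           w, h, AV_PIX_FMT_YUV420P, flags, NULL, NULL, NULL);
        sws_scale(toBgr, src, srcLs, 0, h, bgr, bgrLs);
        sws_scale(toYuv, bgr, bgrLs, 0, h, dst, dstLs);
        sws_freeContext(toBgr);
        sws_freeContext(toYuv);

        long err = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int d = (int)src[0][y * srcLs[0] + x] - (int)dst[0][y * dstLs[0] + x];
                err += (d < 0) ? -d : d;
            }

        av_freep(&src[0]); av_freep(&bgr[0]); av_freep(&dst[0]);
        return err;
    }

    int main()
    {
        printf("SWS_FAST_BILINEAR     : %ld\n", roundTripError(320, 240, SWS_FAST_BILINEAR));
        printf("accurate, full chroma : %ld\n", roundTripError(320, 240,
               SWS_BILINEAR | SWS_ACCURATE_RND | SWS_FULL_CHR_H_INT | SWS_FULL_CHR_H_INP));
        return 0;
    }

    If the error stays nonzero even with the accurate flags, the round trip is inherently lossy, which would point at the chroma resampling and 8-bit rounding rather than at a bug in the function above.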

    However, when I use the workaround in step 1, I obtain the exact same output as if this extra function wasn't called (that is, crisp and clear H265 without compression artifacts). To be sure that the conversion took place, I modified the red value (inside the part of the code that now says if (false)), and I can indeed see the changes when playing the H265 output file with VLC.

    From that test it is clear that after converting the input data in the AVFrame from YUV420P to BGR24 in the cv::Mat, all the information needed to convert it back into the original YUV420P input data was available. Yet that is not what happens without the workaround, as the compression artifacts prove.
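
    A hedged aside (mine, not the poster's): raw capacity is indeed not the issue, as a quick byte count shows, so any loss has to come from the transformations themselves; each YUV420P -> BGR24 pass upsamples the chroma planes and applies an 8-bit integer YUV -> RGB matrix, and the way back downsamples and rounds again, so the composition is generally not a perfect identity.

    For a w x h frame:
        bytes(YUV420P) = w*h (Y) + (w/2)*(h/2) (U) + (w/2)*(h/2) (V) = 1.5 * w*h
        bytes(BGR24)   = 3 * w*h
        e.g. 1920x1080: 3,110,400 bytes (YUV420P) vs 6,220,800 bytes (BGR24)

    It is also worth noting, still as an assumption to verify, that with the workaround of step 1 both sws_scale() calls see RGB24 <-> BGR24, a pure per-pixel channel swap with no chroma resampling and no matrix, and a swap composed with its inverse is byte-exact; that would be consistent with the output being identical to never calling the function at all.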

    I used the first 17 seconds of the movie clip "Charge", encoded in H264 and available on the Blender website.

    Is there anyone who has some explanation, or who can help me understand, why the code WITHOUT the workaround does not cleanly convert the input data forwards and then backwards into the original input data?

    This is what I see:
    [screenshot: H265 output showing compression artifacts]

    compared to what I see with the workaround, OR (update) in the Visual Check section (cv::imshow) if step 4 of the code is commented out:
    [screenshot: clean, artifact-free output]

    These are the FFmpeg StreamingParams that I used on input:

    copy_audio => 1
copy_video => 0
vid_codec => "libx265"
vid_video_codec_priv_key => "x265-params"
vid_codec_priv_value => "keyint=60:min-keyint=60:scenecut=0"

// Encoder output
x265 [info]: HEVC encoder version 3.5+98-753305aff
x265 [info]: build info [Windows][GCC 12.2.0][64 bit] 8bit+10bit+12bit
x265 [info]: using cpu capabilities: MMX2 SSE2Fast LZCNT SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
x265 [info]: Main profile, Level-3.1 (Main tier)
x265 [info]: Thread pool 0 using 64 threads on numa nodes 0
x265 [info]: Slices                              : 1
x265 [info]: frame threads / pool features       : 1 / wpp(12 rows)
x265 [info]: Coding QT: max CU size, min CU size : 64 / 8
x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
x265 [info]: ME / range / subpel / merge         : hex / 57 / 2 / 2
x265 [info]: Lookahead / bframes / badapt        : 15 / 4 / 0
x265 [info]: b-pyramid / weightp / weightb       : 1 / 1 / 0
x265 [info]: References / ref-limit  cu / depth  : 3 / on / on
x265 [info]: AQ: mode / str / qg-size / cu-tree  : 2 / 1.0 / 32 / 1
x265 [info]: Rate Control / qCompress            : ABR-2000 kbps / 0.60
x265 [info]: VBV/HRD buffer / max-rate / init    : 4000 / 2000 / 0.750
x265 [info]: tools: rd=2 psy-rd=2.00 rskip mode=1 signhide tmvp fast-intra
x265 [info]: tools: strong-intra-smoothing lslices=4 deblock sao

  • Trying to upload a video to a server and then play it back to a video view (Xamarin Android)

    31 July 2016, by stackOverNo

    I'm currently working on a Xamarin.Android project, and am attempting to upload a video to an AWS server and then play it back. The upload is working correctly as far as I can tell.

    I'm retrieving the file from the user's phone, turning it into a byte array, and uploading that. This is the code to upload:

    if (isImageAttached || isVideoAttached)
    {
        //upload the file
        byte[] fileInfo = System.IO.File.ReadAllBytes(filePath);
        Task<Media> task = client.SaveMediaAsync(fileInfo, nameOfFile);
        mediaObj = await task;

        //other code below is irrelevant to example
    }

    and SaveMediaAsync is a function I wrote in a PCL:

    public async Task<Media> SaveMediaAsync(byte[] fileInfo, string fName)
    {
        Media a = new Media();
        var uri = new Uri(RestUrl);

        try
        {
            MultipartFormDataContent form = new MultipartFormDataContent();
            form.Add(new StreamContent(new MemoryStream(fileInfo)), "file", fName);  //add file

            //post the form; client is an HttpClient object
            var response = await client.PostAsync(uri, form);
            string info = await response.Content.ReadAsStringAsync();

            //save info to media object
            string[] parts = info.Split('\"');
            a.Name = parts[3];
            a.Path = parts[7];
            a.Size = Int32.Parse(parts[10]);
        }
        catch (Exception ex)
        {
            //handle exception
        }

        return a;
    }

    After uploading the video like that, I'm able to view it in a browser using the public URL. The quality is the same, and there is no issue with lag or load time. However, when I try to play back the video using the same public URL in my app on an Android device, it takes an unbelievably long time to load the video. Even once it is loaded, it plays less than a second of it and then seems to start loading the video again (the part of the progress bar that shows how much of the video has loaded jumps back to the current position and starts loading again).

    VideoView myVideo = FindViewById<VideoView>(Resource.Id.TestVideo);

    myVideo.SetVideoURI(Android.Net.Uri.Parse(url));

    //add media controller
    MediaController cont = new MediaController(this);
    cont.SetAnchorView(myVideo);
    myVideo.SetMediaController(cont);

    //start video
    myVideo.Start();

    Now I'm trying to play a 15-second video that is 5.9 MB. When I try to play a 5-second video that's 375 kB, it plays with no issue. This leads me to believe I need to make the video file smaller before playing it back, but I'm not sure how to do that. I'm trying to allow users to upload their own videos, so I'll have all different file formats and sizes.

    I've seen some people suggesting FFmpeg as a C# library to alter video files, but I'm not quite sure what it is I need to do to the video file. Can anyone fill in the gaps in my knowledge here?
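
    For scale (my arithmetic, not the poster's): 5.9 MB over 15 seconds is roughly 3 Mbit/s, a lot for a mobile connection, and an MP4 whose moov atom sits at the end has to be fetched almost entirely before progressive playback can begin. If the videos are re-encoded server-side, a typical command with the ffmpeg CLI looks like the sketch below; the CRF, preset and audio bitrate are assumptions to tune, not a definitive recipe:

    # Re-encode to a streaming-friendly MP4: H264 video, AAC audio, and the
    # moov atom moved to the front (-movflags +faststart) so that playback
    # can start before the whole file has been downloaded.
    ffmpeg -i input.mp4 -c:v libx264 -crf 26 -preset medium \
           -c:a aac -b:a 128k -movflags +faststart output.mp4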

    Thanks for your time, it's greatly appreciated!

  • Revert "rtpenc_chain: Don't copy the time_base back to the caller"

    29 May 2014, by Martin Storsjö
    Revert "rtpenc_chain: Don't copy the time_base back to the caller"

    While it strictly isn’t necessary to copy the time base (since
    any use of it is scaled in ff_write_chained), it still is better
    to signal the actual time base to the caller, avoiding one
    unnecessary rescaling. This also lets the caller know what the
    actual internal time base is, in case that is useful info
    for some caller.

    This reverts commit 397ffde115f4e0482a007b672f40457596cebfc4.

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DH] libavformat/rtpenc_chain.c
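
    As background for the time-base discussion above, rescaling a timestamp between two time bases is a single call in FFmpeg; this sketch (my illustration, not part of the commit) shows the kind of conversion ff_write_chained performs, using the standard 1/90000 RTP video clock and a hypothetical 1/1000 caller time base:

    extern "C" {
    #include <libavutil/mathematics.h>
    }
    #include <cstdio>

    int main()
    {
        AVRational rtp_tb    = {1, 90000};   // RTP's standard 90 kHz video clock
        AVRational caller_tb = {1, 1000};    // hypothetical caller time base (milliseconds)

        int64_t pts_rtp = 180000;            // 2 seconds expressed in the RTP time base
        int64_t pts_ms  = av_rescale_q(pts_rtp, rtp_tb, caller_tb);

        printf("%lld ms\n", (long long)pts_ms);  // prints "2000 ms"
        return 0;
    }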