
Other articles (37)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the instances of the shared hosting (mutualisation) on a regular basis. Combined with a system Cron on the central site of the farm, it generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
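    The system Cron mentioned above can be as simple as a crontab entry that requests the central site every minute; a minimal sketch, assuming a hypothetical central-site URL:

    ```
    # Crontab entry (hypothetical URL): every minute, fetch the central
    # site so its super Cron (gestion_mutu_super_cron) fires and, in turn,
    # triggers the Cron of every hosted instance.
    * * * * * curl -s http://mutu.example.org/ > /dev/null 2>&1
    ```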

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Encoding and processing into web-friendly formats

    13 avril 2011, par

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
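    The conversions this article describes can be approximated with plain ffmpeg; a hedged sketch of equivalent commands (not MediaSPIP's actual pipeline, and the file names are placeholders):

    ```shell
    # HTML5-friendly WebM (VP8 video + Vorbis audio)
    ffmpeg -i input.mov -c:v libvpx -b:v 1M -c:a libvorbis output.webm
    # MP4 (H.264 + AAC), playable in HTML5 and Flash players
    ffmpeg -i input.mov -c:v libx264 -crf 23 -c:a aac output.mp4
    # MP3 audio for the Flash fallback
    ffmpeg -i input.wav -c:a libmp3lame -q:a 4 output.mp3
    ```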

On other sites (5951)

  • FFMPEG spoils the extreme pixels when converting and cropping images [closed]

20 December 2024, by Jacob

    I have a side image-processing task at work.
    I am using the following command to convert a spherical panorama into a cubemap:

    Path/ffmpeg.exe -i "Path/PanoImageInput.png" -vf v360=equirect:c6x1:fin_pad=64:out_frot=002200,scale=-1:2048:flags=neighbor "Path/CubemapOutput.png"

    I then slice it into separate images, face 1 for example:

    Path/ffmpeg.exe -i "Path/CubemapOutput.png" -vf crop=2048:2048:0:0 -sws_flags neighbor "Path/face_1.png"

    On both the cubemap and the sliced images I get artifacts of this kind that really bother me:

    Cubemap image junction
    The junction and part of the edge of the cubemap image

    The pixels at the junctions of the six faces in the cubemap, and the outermost pixels of every image, change their color slightly. This eventually produces visible seams in the scene assembled from the images.

    Is there any way I can get rid of them?

    I have tried different interpolation methods in both filters (fast, bicubic, gauss, etc.); none of them seems to have any effect.

    I also tried to crop a couple of pixels less, something like crop=2040:2040:4:4.

    I thought it was all because of the cubemap and its distortions, but even at the plain edges of the image the outermost pixels change their hue.

    I also hoped that pad would let me control the area where the pixels deteriorate during the conversion, but nothing I put there matters: with fin_pad, fout_pad and any numbers for them, with scale removed or added, neither the final image size changes nor do any extra borders appear. Most likely I just don't understand what this option really does.
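    One possibility worth ruling out, offered as an assumption rather than a confirmed diagnosis: if ffmpeg processes the PNG in a chroma-subsampled format such as yuv420p, color is averaged across neighbouring pixels, which would explain a slight hue shift exactly at edges and face junctions. Forcing a non-subsampled RGB format before v360 tests this; a sketch using the question's paths (if v360 rejects rgb24, the planar gbrp format is an alternative):

    ```shell
    Path/ffmpeg.exe -i "Path/PanoImageInput.png" -vf "format=rgb24,v360=equirect:c6x1:fin_pad=64:out_frot=002200,scale=-1:2048:flags=neighbor" "Path/CubemapOutput.png"
    ```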

    


  • Android ffmpeg save and append h264 streamed videos

8 October 2012, by Stefan Alexandru

    I need to save a video file generated from two video streams coming from two different sources. I'm using RTSP over TCP/IP, and the videos are encoded with h264.
    I need to record the video from the first source first and then continue with the second source.
    So what I tried was to declare two AVFormatContext instances, initialize both with avformat_open_input(&context, "rtsp://......", NULL, &options), then read frames with av_read_frame(context, &packet) and write them into the video file with av_write_frame(oc, &packet).
    It works fine for saving the video from the first source, but if, for example, I saved y frames from the first context, then when I try reading and saving the frames from the second context into the same file, for the first y frames I am trying to save, av_write_frame(oc, &packet2) returns -22 and does not add the frame to the file.

    I think the problem is that the context variable remembers how many frames were read, and it gives every read packet an identification number to make sure it isn't written twice. When I use a new context those identification numbers reset, but the AVOutputFormat or the AVFormatContext also retains the id of the packet it expects to receive, and it will not write anything until it receives a packet with that id.
    Now I'm wondering how I could solve this. I can't find any setter for that id, or any way to reuse the same context. I thought about modifying the ffmpeg sources, but they are pretty complex and I couldn't find what I was looking for.
    An alternative would be to save the two videos in two different files, but I don't know how to append them afterwards: ffmpeg can only append videos encoded with MPEG, and re-encoding the video isn't really an option, as it would take too much time. I also couldn't find any other working way to append two MP4 videos encoded with h264.

    I'll be happy to hear any kind of usable idea for this problem.
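    On the "append two files" alternative: ffmpeg's concat demuxer can join two h264 recordings without re-encoding, provided both streams share the same codec parameters (resolution, profile, timebase). A minimal sketch with hypothetical file names:

    ```shell
    # List the parts in playback order.
    printf "file 'part1.mp4'\nfile 'part2.mp4'\n" > list.txt

    # -f concat reads the list; -c copy remuxes without re-encoding.
    # -safe 0 is only needed for absolute or unusual paths.
    ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4
    ```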

  • FFMPEG vsync drop and regeneration [closed]

11 April, by Lhh92

    According to the ffmpeg documentation for the -vsync parameter:

    Video sync method. For compatibility reasons old values can be specified as numbers. Newly added values will have to be specified as strings always.

    drop
    As passthrough but destroys all timestamps, making the muxer generate fresh timestamps based on frame-rate.

    It appears that the mpegts muxer does not regenerate the timestamps (PTS/DTS) correctly; however, piping the output after vsync drop to a second process as raw h264 does force mpegts to regenerate the PTS.

    Generate a test stream:

    ffmpeg -f lavfi -i testsrc=duration=20:size=1280x720:rate=50 -pix_fmt yuv420p -c:v libx264 -b:v 4000000 -x264-params ref=1:bframes=0:vbv-maxrate=4500:vbv-bufsize=4000:nal-hrd=cbr:aud=1:bframes=0:intra-refresh=1:keyint=30:min-keyint=30:scenecut=0 -f mpegts -muxrate 5985920 -pcr_period 20 video.ts -y

    Generate an output ts that has correctly spaced PTS values:

    ffmpeg -i video.ts -vsync drop -c:v copy -bsf:v h264_mp4toannexb -f h264 - | ffmpeg -fflags +igndts -fflags +nofillin -fflags +genpts -r 50 -i - -c:v copy -f mpegts -muxrate 5985920 video_all_pts_ok.ts -y

    Generate an output ts where all PTS are zero:

    ffmpeg -i video.ts -vsync drop -c:v copy -bsf:v h264_mp4toannexb -f mpegts - | ffmpeg -fflags +igndts -fflags +nofillin -fflags +genpts -r 50 -i - -c:v copy -f mpegts -muxrate 5985920 video_all_pts_zero.ts -y

    So vsync drop does destroy the timestamps, but mpegts doesn't regenerate them? Any ideas on what needs adding to make this work as a single ffmpeg command?

    Tested on both Linux and Windows with the same result.
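    One single-command avenue that may be worth testing (an assumption, not verified here): recent ffmpeg builds ship a setts bitstream filter that rewrites packet timestamps from an expression, sidestepping the muxer's regeneration entirely. With mpegts's 90 kHz timebase, 50 fps corresponds to 1800 ticks per frame:

    ```shell
    # Hypothetical single-process variant: N is the packet index in
    # setts expressions, so N*1800 spaces PTS/DTS for 50 fps at 90 kHz.
    ffmpeg -i video.ts -c:v copy -bsf:v "setts=ts=N*1800" -f mpegts -muxrate 5985920 video_setts.ts -y
    ```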