
Other articles (40)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flash player Flowplayer is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...) (a rough sketch of these two extra actions follows this list)

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFMpeg: the main encoder, able to transcode almost every type of video and audio file into formats readable on the Internet (see this tutorial for its installation); Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Additional, optional binaries: flvtool2: (...)
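
    Illustration only, not SPIPmotion's actual code: a minimal Python sketch of the two extra actions described in "From upload to the final video" above, namely probing the source file's streams and generating a thumbnail. It assumes ffprobe and ffmpeg are installed and on the PATH; the file names are hypothetical.

    import json
    import subprocess

    SOURCE = "source.mp4"  # hypothetical input file

    # Retrieve technical information about the file's audio and video streams.
    probe = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_streams", SOURCE],
        capture_output=True, text=True, check=True)
    streams = json.loads(probe.stdout)["streams"]
    print([s["codec_type"] + ":" + s["codec_name"] for s in streams])

    # Generate a thumbnail by extracting a single frame one second into the video.
    subprocess.run(
        ["ffmpeg", "-y", "-ss", "1", "-i", SOURCE, "-frames:v", "1", "thumbnail.jpg"],
        check=True)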

On other sites (7342)

  • Basic "pass-through" use of FFmpegReader/FFmpegWriter in scikit-video

    6 February 2021, by JonathanZ supports MonicaC

    I am starting to use scikit-video and am having trouble writing files. I have reduced the problem to the simplest possible example

    


    import skvideo.io

    vid_file = "6710185719062326259_stamp_25pct.mp4"
    output_file = "out_temp3.mp4"

    # Read the source frame by frame and write each frame straight back out.
    reader = skvideo.io.FFmpegReader(vid_file)
    writer = skvideo.io.FFmpegWriter(output_file)
    for frame in reader.nextFrame():
        writer.writeFrame(frame)
    writer.close()


    


    I'm playing the files in VLC: vid_file plays correctly, but the output file, though playable, is mostly big green blocks (though I can make out some details from the original video in it).

    


    My goal, of course, is to do "interesting" manipulations of each frame before I write it out, but I need to get the "no modifications" version working correctly first. I'm also going to be using this on large files, so the vread/vwrite functions that process an entire file at once are not appropriate.
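
    (Illustration only, with a hypothetical output name: frames yielded by FFmpegReader.nextFrame() are H x W x 3 uint8 numpy arrays, so a per-frame manipulation slots straight into the same loop.)

    import numpy as np
    import skvideo.io

    reader = skvideo.io.FFmpegReader("6710185719062326259_stamp_25pct.mp4")
    writer = skvideo.io.FFmpegWriter("out_flipped.mp4")  # hypothetical output name
    for frame in reader.nextFrame():
        frame = np.fliplr(frame)  # example manipulation: mirror each frame horizontally
        writer.writeFrame(frame)
    writer.close()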

    


    I'm guessing I need to set the appropriate values in the outputdict parameter for the FFmpegWriter, but there are so many that I don't know where to start. I have tried

    


    writer = skvideo.io.FFmpegWriter(output_file, outputdict={'-crf': '0', '-pix_fmt': 'rgb24'})


    


    (-crf 0 to suppress any compression, -pix_fmt rgb24 as that's what FFmpegReader says it delivers by default), but these don't work either.
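
    (One more variant worth trying, offered as a guess rather than a verified fix: skvideo passes outputdict entries to ffmpeg as output options and inputdict entries as input options, so the pixel format can be pinned on both sides, rgb24 going in and a conventional H.264/yuv420p coming out, along with the 15 fps frame rate that ffprobe reports for the source.)

    import skvideo.io

    writer = skvideo.io.FFmpegWriter(
        "out_temp3.mp4",
        inputdict={
            '-pix_fmt': 'rgb24',   # frames handed to writeFrame() are rgb24 numpy arrays
            '-r': '15',            # match the source frame rate reported by ffprobe
        },
        outputdict={
            '-vcodec': 'libx264',  # encode with x264
            '-pix_fmt': 'yuv420p', # widely playable pixel format
            '-crf': '17',          # near-lossless; '0' for true lossless
        })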

    


    Any ideas on how to make this work?

    


    Here's the skvideo.io.ffprobe video information for the input file.

    


    {
    "@index": "0",
    "@codec_name": "h264",
    "@codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
    "@profile": "High",
    "@codec_type": "video",
    "@codec_time_base": "1/30",
    "@codec_tag_string": "avc1",
    "@codec_tag": "0x31637661",
    "@width": "480",
    "@height": "270",
    "@coded_width": "480",
    "@coded_height": "272",
    "@has_b_frames": "2",
    "@pix_fmt": "yuv420p",
    "@level": "21",
    "@chroma_location": "left",
    "@refs": "1",
    "@is_avc": "true",
    "@nal_length_size": "4",
    "@r_frame_rate": "15/1",
    "@avg_frame_rate": "15/1",
    "@time_base": "1/15360",
    "@start_pts": "0",
    "@start_time": "0.000000",
    "@duration_ts": "122880",
    "@duration": "8.000000",
    "@bit_rate": "183806",
    "@bits_per_raw_sample": "8",
    "@nb_frames": "120",
    "disposition": {
        "@default": "1",
        "@dub": "0",
        "@original": "0",
        "@comment": "0",
        "@lyrics": "0",
        "@karaoke": "0",
        "@forced": "0",
        "@hearing_impaired": "0",
        "@visual_impaired": "0",
        "@clean_effects": "0",
        "@attached_pic": "0",
        "@timed_thumbnails": "0"
    },
    "tag": [
        {
            "@key": "language",
            "@value": "und"
        },
        {
            "@key": "handler_name",
            "@value": "VideoHandler"
        }
    ]
}


    


    I will mention that when I ffprobe the output file the only differences I see are 1) the timing data is different, which isn't surprising, and 2) the output file has

    


        "@has_b_frames": "0",
    "@pix_fmt": "yuv444p",


    


    I'm pretty confident the reader is working okay, because if I write out the data with

    


    skimage.io.imsave('x.png', frame,  check_contrast=False)


    


    it looks good.

    


  • Why does ffmpeg only record the first 100 frames of the animation?

    9 February 2020, by jack fang

    I am using the following Python code to make an animation and want to save it as a video through FFmpeg (in PyCharm):

    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.animation as animation
    from matplotlib.animation import FFMpegWriter

    def func():
       for j in range(1, len(t)):
           time = j * 0.01
           print('time:{:2}'.format(time))
           yield time

    def animate(data):
       time = data
       ax2.plot(time, time, **{'marker':'o'})
       ax2.set_title('t = {:.2}'.format(time))
       return  ax2

    def init():
       ax2.plot(0, 0)
       return ax2

    dt = 0.01
    t = np.arange(0, 50, dt)

    fig2 = plt.figure()
    ax2 = fig2.add_subplot(111, autoscale_on=True)
    ax2.grid()

    ani = animation.FuncAnimation(fig2, animate, func, interval=dt*1000, blit=False, init_func=init, repeat=False)

    plt.rcParams['animation.ffmpeg_path'] = 'C:\Program Files\\ffmpeg\\bin\\ffmpeg.exe'
    writer = FFMpegWriter(fps=15, metadata=dict(artist='Me'), bitrate=1800)
    ani.save("movie.mp4", writer=writer)

    #plt.show()

    But when time reaches 1.0 the process stops, even though it is supposed to stop when time reaches 50.0; the cutoff is visible in the PyCharm Run console.
    I then checked movie.mp4 and found that the video ends when time reaches 1.0.
    That is to say, only the first 100 frames of the animation were converted into the .mp4 file, so I am very confused about where the rest of the frames went.

    I tried running the code through the Windows cmd prompt but got the same result.
    I then uncommented the line #plt.show() and found that the process stopped when time reached 50.0 and the animation was displayed properly, but still only the first 100 frames were converted.

    I am now very confused by this problem and don't know how to solve it. Any help would be appreciated. :)
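
    (A guess at the cause, not a verified fix: when frames is a generator, FuncAnimation only caches save_count frames for saving, and save_count defaults to 100, which matches the cutoff described above. A minimal variant of the FuncAnimation call from the code above with an explicit save_count; the rest of the script is unchanged.)

    # save_count defaults to 100 when `frames` is a generator, which is exactly
    # the number of frames that ended up in movie.mp4.
    ani = animation.FuncAnimation(
        fig2, animate, func,
        interval=dt * 1000,
        blit=False,
        init_func=init,
        repeat=False,
        save_count=len(t) - 1,  # func() yields one frame per step of t after the first
    )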

  • Can't download Accord NuGet in VS 2015 nor VS 2017

    26 June 2017, by John Leone

    Here is the package: NuGet Accord.Video.FFMPEG

    Here is the error:
    Package restore failed. Rolling back package changes for 'EssentialTimeLapseVideo'.

    I used this NuGet package in a Windows Forms app I am working on and had no issues. Then something came up with Time Lapse, and I wanted to use it in a UWP app, but for some reason it won't install.

    I tried the mirror suggested, and got the following (screenshot omitted).

    Any help would be appreciated. Thanks, John.