Advanced search

Media (1)

Word: - Tags -/biomaping

Other articles (111)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Supported formats

    28 January 2010, by

    The following commands provide information about the formats and codecs supported by the local ffmpeg installation (a programmatic version of this check is sketched just after this list):
    ffmpeg -codecs
    ffmpeg -formats
    Supported input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)

  • Adding notes and captions to images

    7 February 2011, by

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to adjust the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)
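
    As a rough illustration of the check described in the "Supported formats" entry above, here is a minimal Python sketch that runs the same ffmpeg listing commands and looks for a given codec or format name (the names queried here are just examples, not taken from the article):

import subprocess

def ffmpeg_supports(kind, name):
    """kind is 'codecs' or 'formats'; name is matched against ffmpeg's listing."""
    listing = subprocess.run(
        ["ffmpeg", "-hide_banner", f"-{kind}"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each supported entry appears on its own line, e.g. " DEV.LS h264  ..."
    return any(f" {name} " in line for line in listing.splitlines())

print(ffmpeg_supports("codecs", "h264"))    # True if H.264 is listed
print(ffmpeg_supports("formats", "flv"))    # True if the FLV format is listed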

On other sites (10859)

  • SetInodeAttributes error when creating a file inside a bucket

    19 August 2022, by Turgut

    I've made a C++ program that lives in GKE and takes some videos as input using ffmpeg, then does something with that input using OpenGL, and finally encodes those edited videos into a single output. Normally the program works perfectly fine on my local machine: it encodes just as I want it to, with no warnings whatsoever. But I want it to encode that video directly to the cloud using a gcsfuse bucket. I've successfully mounted the bucket, and it seems to create the file at the start of my program's run. But when the run is over it is supposed to finish the encoding and finalize the video file. However, when it reaches the end it gives this error on the terminal where I run the gcsfuse command:

    


    2022/08/19 21:38:15.477586 SetInodeAttributes: input/output error, SetMtime: UpdateObject: not retrying UpdateObject("c36c2633-d4ee-4d37-825f-88ae54b86100.mp4"): gcs.NotFoundError: googleapi: Error 404: No such object: development-videoo-storage1/c36c2633-d4ee-4d37-825f-88ae54b86100.mp4, notFound
fuse: 2022/08/19 21:38:15.477660 *fuseops.SetInodeAttributesOp error: input/output error
2022/08/19 21:38:15.637346 SetInodeAttributes: input/output error, SetMtime: UpdateObject: not retrying UpdateObject("c36c2633-d4ee-4d37-825f-88ae54b86100"): gcs.NotFoundError: googleapi: Error 404: No such object: development-videoo-storage1/c36c2633-d4ee-4d37-825f-88ae54b86100, notFound
fuse: 2022/08/19 21:38:15.637452 *fuseops.SetInodeAttributesOp error: input/output error
2022/08/19 21:38:15.769569 GetInodeAttributes: input/output error, clobbered: StatObject: not retrying StatObject("c36c2633-d4ee-4d37-825f-88ae54b86100.mp4"): gcs.NotFoundError: googleapi: Error 404: No such object: development-videoo-storage1/c36c2633-d4ee-4d37-825f-88ae54b86100.mp4, notFound
fuse: 2022/08/19 21:38:15.769659 *fuseops.GetInodeAttributesOp error: input/output error


    


    At the end I'm left with a file of the same size as my desired output, but it is an invalid video with no frames in it.

    


    I'm using a service account to activate my bucket; I can read files just fine, and my service account has every permission it needs. Here is how I mount my bucket:

    


    GOOGLE_APPLICATION_CREDENTIALS=./service-account.json gcsfuse -o nonempty --foreground cloud-storage-name /media


    


    I'm using Ubuntu 22.04.
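
    For reference, a minimal reproduction sketch of the flow described above, reduced to a plain ffmpeg CLI call driven from Python (the real program is C++ using ffmpeg and OpenGL; the input file name, the /media mount point and the output name below are assumptions):

import subprocess

MOUNT_POINT = "/media"                      # where gcsfuse mounted the bucket
OUTPUT = f"{MOUNT_POINT}/test-output.mp4"   # hypothetical test object in the bucket

# The output file is created on the mount as soon as ffmpeg starts writing;
# the MP4 trailer is only written when the run finishes, which is also when
# the SetMtime/SetInodeAttributes calls appear in the gcsfuse log above.
subprocess.run(
    ["ffmpeg", "-y", "-i", "input.mp4", "-c:v", "libx264", "-c:a", "aac", OUTPUT],
    check=True,
)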

    


  • ffmpeg sometimes doesn't start reading an RTSP stream from an IP camera, with no error

    8 December 2022, by Petr Dub

    I have an IP camera that provides an RTSP stream. I want to read this stream, rotate it 90° and provide the data for displaying the video via an IIS server. I have a working solution using ffmpeg, but it is not reliable: it ends after several minutes without an error, and sometimes it does not start at all.

    


    I use this command:

    ffmpeg.exe -thread_queue_size 128 -i "rtsp://<login>:<pwd>@192.168.0.201:554/live/ch0" -vf "transpose=1" -y -c:a aac -b:a 160000 -ac 1 -s 432x768 -g 50 -hls_time 2 -hls_list_size 1 -start_number 1 -hls_flags delete_segments m:\playlist.m3u8

    It works, but after several minutes (40 to 100) it stops without error (I have no idea why). When I restart the same ffmpeg command, it sometimes doesn't start reading the stream, yet it doesn't produce any error and the ffmpeg process keeps running. The error output from ffmpeg shows:


ffmpeg version 2022-11-03-git-5ccd4d3060-full_build-www.gyan.dev Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 12.1.0 (Rev2, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
  libavutil      57. 40.100 / 57. 40.100
  libavcodec     59. 51.101 / 59. 51.101
  libavformat    59. 34.101 / 59. 34.101
  libavdevice    59.  8.101 / 59.  8.101
  libavfilter     8. 49.101 /  8. 49.101
  libswscale      6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc    56.  7.100 / 56.  7.100


    The next line should be:


    Input #0, rtsp, from 'rtsp://<login>:<pwd>@192.168.0.201:554/live/ch0'


    Any ideas what I should do? At the very least, how can I force ffmpeg to end with an error instead of staying “running”…


    Edit: I finally wrote a Windows service that starts ffmpeg and monitors its error output. When nothing arrives on that output for several seconds, it kills the ffmpeg process and starts a new one.
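
    For illustration, a rough sketch of that watchdog idea in Python (the actual implementation described above is a Windows service; the 10-second silence threshold and the restart loop here are assumptions, and the command line is the one from the question):

import subprocess
import threading
import time

FFMPEG_CMD = [
    "ffmpeg.exe", "-thread_queue_size", "128",
    "-i", "rtsp://<login>:<pwd>@192.168.0.201:554/live/ch0",
    "-vf", "transpose=1", "-y",
    "-c:a", "aac", "-b:a", "160000", "-ac", "1",
    "-s", "432x768", "-g", "50",
    "-hls_time", "2", "-hls_list_size", "1", "-start_number", "1",
    "-hls_flags", "delete_segments",
    r"m:\playlist.m3u8",
]
SILENCE_LIMIT = 10  # seconds without stderr output before assuming a hang (assumption)

def run_with_watchdog():
    while True:
        proc = subprocess.Popen(FFMPEG_CMD, stderr=subprocess.PIPE)
        last_output = time.monotonic()

        def read_stderr():
            nonlocal last_output
            while True:
                chunk = proc.stderr.read1(4096)   # read whatever bytes are available
                if not chunk:
                    break                          # pipe closed: ffmpeg exited or was killed
                last_output = time.monotonic()

        threading.Thread(target=read_stderr, daemon=True).start()

        while proc.poll() is None:
            time.sleep(1)
            if time.monotonic() - last_output > SILENCE_LIMIT:
                proc.kill()                        # silent for too long: assume it hung
                break
        proc.wait()                                # reap, then restart ffmpeg

if __name__ == "__main__":
    run_with_watchdog()

    Reading raw chunks rather than lines matters here, because ffmpeg refreshes its progress line with carriage returns rather than newlines.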


  • How to overlay a sequence of frames on a video using ffmpeg-python?

    19 November 2022, by Yogesh Yadav

    I tried the code below, but it only shows the background video.


background_video = ffmpeg.input("input.mp4")
overlay_video = ffmpeg.input(f'{frames_folder}*.png', pattern_type='glob', framerate=25)
subprocess = ffmpeg.overlay(
    background_video,
    overlay_video,
).filter("setsar", sar=1)


    I also tried assembling the sequence of frames into a .webm/.mov video, but transparency is lost: the video takes black as the background.


    P.S. The frame size is the same as the background video size, so no scaling is needed.
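
    For context, a complete ffmpeg-python invocation of the graph above would also need an output node and a run call. A minimal sketch, with the output file name and encoder settings assumed (this just completes the pipeline shown in the question; it is not presented as the fix):

import ffmpeg

frames_folder = "./frames/"   # assumed location of the PNG sequence

background_video = ffmpeg.input("input.mp4")
overlay_video = ffmpeg.input(f"{frames_folder}*.png", pattern_type="glob", framerate=25)

(
    ffmpeg
    .overlay(background_video, overlay_video)   # PNG frames on top of the video
    .filter("setsar", sar=1)
    .output("overlaid.mp4", vcodec="libx264", pix_fmt="yuv420p")
    .run()
)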


    Edit:


    I tried @Rotem's suggestions:


    Try using single PNG image first


    overlay_video = ffmpeg.input('test-frame.png')


    It's not working for the frames generated by OpenCV, but it works for any other PNG image. This is weird: when I manually view the frames folder, the images appear blank (link to my frames folder). But if I convert these frames into a video (see below), it shows correctly what I drew on each frame.


output_options = {
    'crf': 20,
    'preset': 'slower',
    'movflags': 'faststart',
    'pix_fmt': 'yuv420p'
}
ffmpeg.input(f'{frames_folder}*.png', pattern_type='glob', framerate=25, reinit_filter=0).output(
    'movie.avi',
    **output_options
).global_args('-report').run()


    try creating a video from all the PNG images without overlay


    It's working as expected; the only issue is transparency. Is there a way to create a video with a transparent background? I tried .webm/.mov/.avi but no luck.
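
    On the transparency question above: one combination that is commonly used to keep an alpha channel in an intermediate video is VP9 with the yuva420p pixel format in a .webm container. A hedged ffmpeg-python sketch, not verified against this particular frame set:

import ffmpeg

frames_folder = "./frames/"   # assumed location of the PNG sequence

(
    ffmpeg
    .input(f"{frames_folder}*.png", pattern_type="glob", framerate=25)
    .output("overlay.webm", vcodec="libvpx-vp9", pix_fmt="yuva420p")  # yuva420p keeps alpha
    .run()
)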


    Add .global_args('-report') and check the log file


Report written to "ffmpeg-20221119-110731.log"
Log level: 48
ffmpeg version 5.1 Copyright (c) 2000-2022 the FFmpeg developers
  built with Apple clang version 13.1.6 (clang-1316.0.21.2.5)
  configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/5.1 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-neon
  libavutil      57. 28.100 / 57. 28.100
  libavcodec     59. 37.100 / 59. 37.100
  libavformat    59. 27.100 / 59. 27.100
  libavdevice    59.  7.100 / 59.  7.100
  libavfilter     8. 44.100 /  8. 44.100
  libswscale      6.  7.100 /  6.  7.100
  libswresample   4.  7.100 /  4.  7.100
  libpostproc    56.  6.100 / 56.  6.100
Input #0, image2, from './frames/*.png':
  Duration: 00:00:05.00, start: 0.000000, bitrate: N/A
  Stream #0:0: Video: png, rgba(pc), 1920x1080, 25 fps, 25 tbr, 25 tbn
Codec AVOption crf (Select the quality for constant quality mode) specified for output file #0 (movie.avi) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
Codec AVOption preset (Configuration preset) specified for output file #0 (movie.avi) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
Stream mapping:
  Stream #0:0 -> #0:0 (png (native) -> mpeg4 (native))
Press [q] to stop, [?] for help
Output #0, avi, to 'movie.avi':
  Metadata:
    ISFT            : Lavf59.27.100
  Stream #0:0: Video: mpeg4 (FMP4 / 0x34504D46), yuv420p(tv, progressive), 1920x1080, q=2-31, 200 kb/s, 25 fps, 25 tbn
    Metadata:
      encoder         : Lavc59.37.100 mpeg4
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: N/A
frame=  125 fps= 85 q=31.0 Lsize=     491kB time=00:00:05.00 bitrate= 804.3kbits/s speed=3.39x
video:482kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.772174%


    To draw the frames I used the code below.


import cv2
import numpy as np

for i in range(num_frames):
    transparent_img = np.zeros((height, width, 4), dtype=np.uint8)
    cv2.line(transparent_img, (x1, y1), (x2, y2), (255, 255, 255), thickness=1, lineType=cv2.LINE_AA)
    self.frames.append(transparent_img)


# To save each frame of the video in the given folder
for i, f in enumerate(frames):
    cv2.imwrite("{}/{:0{n}d}.png".format(path_to_frames, i, n=num_digits), f)