
Other articles (61)

  • Updating from version 0.1 to 0.2

    24 June 2013

    Explanation of the notable changes made when upgrading MediaSPIP from version 0.1 to version 0.2. What is new?
    Software dependencies: the latest versions of FFmpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 has been replaced by flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes in your MediaSPIP, or news about your projects, using the news section.
    In spipeo, MediaSPIP's default theme, news items are displayed at the bottom of the main page, below the editorial items.
    You can customise the form used to create a news item.
    News item creation form: for a document of type "news item", the default fields are: publication date (customise the publication date) (...)

On other sites (10397)

  • FFmpeg transcode GIF into Mp4 and Mp4 to AVI using GPU

    9 October 2023, by Cristian

    I'm trying to convert animated GIFs to mp4 and mp4 to AVI with FFmpeg.

    I started out using only the CPU, but I have to process millions of GIF/mp4 content pieces, so I started to get a lot of errors while processing them and it became a bottleneck. Therefore, I'm trying to use the GPU to process the videos.

    To convert GIF to mp4 on the CPU, I run the following command:

    ffmpeg -i animated.gif -movflags faststart -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" video.mp4

    Using the GPU, I'm trying the following:

    ffmpeg
  -y
  -hwaccel nvdec
  -hwaccel_output_format cuda
  -i gifInputPath
  -threads 1
  -filter_threads 1
  -c:v h264_nvenc
  -vf hwupload_cuda,scale_cuda=-2:320:240:format=yuv420p
  -gpu 0
  mp4VideoPath

    The above command fails with exit status 1.
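
    For comparison, here is a minimal sketch of this step driven from Go (the poster notes below that Golang is used to run FFmpeg). Since FFmpeg has no NVDEC decoder for GIF, the sketch keeps decoding and scaling in software and only moves the H.264 encode to the GPU via h264_nvenc. It assumes ffmpeg is on PATH and was built with NVENC support; the file names, the function name and the scale filter are illustrative, not the poster's code.

    package main

    import (
        "log"
        "os/exec"
    )

    // convertGIFToMP4 runs FFmpeg with software GIF decoding and GPU H.264
    // encoding (h264_nvenc). It assumes an NVENC-enabled FFmpeg build; the
    // input/output paths are illustrative.
    func convertGIFToMP4(in, out string) error {
        args := []string{
            "-y",
            "-i", in,
            // force even dimensions and yuv420p, as in the CPU command above
            "-vf", "scale=trunc(iw/2)*2:trunc(ih/2)*2,format=yuv420p",
            "-c:v", "h264_nvenc",
            "-movflags", "+faststart",
            out,
        }
        output, err := exec.Command("ffmpeg", args...).CombinedOutput()
        if err != nil {
            log.Printf("ffmpeg failed: %v\n%s", err, output)
        }
        return err
    }

    func main() {
        if err := convertGIFToMP4("animated.gif", "video.mp4"); err != nil {
            log.Fatal(err)
        }
    }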

    The following is the dmesg log:

    To convert mp4 videos to AVI, I run the following command:

    ffmpeg
-i videoInputPath
-vcodec rawvideo
-pix_fmt yuv420p
-acodec pcm_s16le
-ar 44100
-ac 2
-s 320x240
-r 4
-f avi
aviOutputVideoPath

    For the GPU I tried:

    ffmpeg
 -y
 -hwaccel cuda
 -hwaccel_output_format cuda
 -i videoInputPath
 -threads 1
 -filter_threads 1
 -c:a pcm_s16le
 -ac 2
 -ar 44100
 -c:v h264_nvenc
 -vf hwupload_cuda,scale_cuda=-2:320:240:format=yuv420p
 -r 4
 -f avi
 -gpu 0
 aviOutputVideoPath

    The following is the dmesg output log:

    1. What would be the best commands for converting GIF into Mp4 and Mp4 into AVI using the GPU (an Amazon NVIDIA T4) rather than the CPU-based configuration, for the best performance, low CPU usage and moderate GPU consumption?

    2. What are the best suggestions for processing these content pieces concurrently using the GPU?

    Note: I'm using Golang to execute the FFmpeg commands.
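
    Regarding the concurrency question above, one common pattern is to cap the number of simultaneous ffmpeg processes with a small worker pool in Go, so a fixed number of conversions compete for the CPU and the single T4 at a time. This is a sketch only, not the poster's code: the convert arguments, the input list and the pool size of 4 are illustrative and should be tuned against CPU load and GPU utilisation (nvidia-smi).

    package main

    import (
        "log"
        "os/exec"
        "sync"
    )

    // convert stands in for one ffmpeg invocation (e.g. GIF -> mp4 with
    // h264_nvenc); the arguments are illustrative, not a verified command.
    func convert(in, out string) error {
        return exec.Command("ffmpeg", "-y", "-i", in,
            "-vf", "scale=trunc(iw/2)*2:trunc(ih/2)*2,format=yuv420p",
            "-c:v", "h264_nvenc",
            "-movflags", "+faststart",
            out).Run()
    }

    func main() {
        inputs := []string{"a.gif", "b.gif", "c.gif"} // illustrative work list

        const workers = 4 // tune against CPU load and GPU utilisation
        jobs := make(chan string)
        var wg sync.WaitGroup

        // start a fixed pool of workers, each running one ffmpeg at a time
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for in := range jobs {
                    if err := convert(in, in+".mp4"); err != nil {
                        log.Printf("convert %s: %v", in, err)
                    }
                }
            }()
        }

        // feed the pool and wait for all conversions to finish
        for _, in := range inputs {
            jobs <- in
        }
        close(jobs)
        wg.Wait()
    }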

  • swscale/aarch64: use multiply accumulate and increase vector factor to 4

    17 November 2019, by Sebastian Pop
    swscale/aarch64: use multiply accumulate and increase vector factor to 4
    

    This patch implements ff_hscale_8_to_15_neon with NEON fused multiply accumulate
    and bumps the vectorization factor from 2 to 4.
    The speedup is 25% on Graviton1 A1 instances based on Cortex-A72 CPUs:

    $ ffmpeg -nostats -f lavfi -i testsrc2=4k:d=2 -vf bench=start,scale=1024x1024,bench=stop -f null -
    before: t:0.040303 avg:0.040287 max:0.040371 min:0.039214
    after: t:0.032168 avg:0.032215 max:0.033081 min:0.032146

    The speedup is 39% on Graviton2 m6g instances based on Neoverse-N1 CPUs:
    $ ffmpeg -nostats -f lavfi -i testsrc2=4k:d=2 -vf bench=start,scale=1024x1024,bench=stop -f null -
    before: t:0.019446 avg:0.019423 max:0.019493 min:0.019181
    after: t:0.014015 avg:0.014096 max:0.015018 min:0.013971

    Tested with `make check` on aarch64-linux.

    Signed-off-by: Sebastian Pop <spop@amazon.com>
    Reviewed-by: Jean-Baptiste Kempf <jb@videolan.org>
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

    • [DH] libswscale/aarch64/hscale.S
  • Trying to grab a video stream from an 802W device

    1 June 2015, by brentil

    A group of us on the RC hobby forums have started trying to use a device called the 802W; it takes RCA in and then broadcasts it back out over a WiFi network that you connect to with an Android or iOS device. These devices are typically used as add-on backup camera systems for vehicles. We want to use it for FPV (First Person Video/View), using smartphones instead of buying more expensive FPV goggles.

    802W device example (plenty of clones online)

    http://www.amazon.com/Wireless-Backup-Camera-Transmitter-Android/dp/B00LJPTJSY

    The problem is that you can only use their WIFI_AVIN or WIFI_AVIN2 application from the app stores to connect to it, because they don't publish any information about how to grab the stream data. We want to write our own apps that can use the stream to present the information better. We've tried using VLC to grab the stream from an Android phone or a Windows PC, but we've had no success so far. I was hoping someone could look at the Wireshark outputs and understand what they're looking at better than I do. I "think" it's a UDP multicast being broadcast, but I just don't know enough to be sure. We've tried using VLC to connect to network streams directly on the device and to udp://@ type addresses, but I think part of the issue might be that we're missing the file path of the stream.

    Attempting to reverse engineer their code for learning purposes showed that ffmpeg lives inside a compiled .so library, which also seems to be where the actual connection code happens; we were unable to dig into it.

    In the images 192.168.72.33 is my phone and 192.168.72.173 is the 802W device.

    Image of what I believe is a UDP broadcast of the video information.

    This is what the stream turns into when the device connects using the WIFI_AVIN application.