Advanced search

Media (1)

Keyword: - Tags -/getid3

Other articles (51)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.

  • Support for all types of media

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (10119)

  • Reliable video decode-(heavy edit)-encode with MediaCodec

    27 May 2017, by Jason M

    I am building an Android app that does heavy video processing. I have read some background material and examples from bigflake as well as the official docs, but did not get a final answer. Here is what I would like to do in my app:

    1. Decode an mp4 video into raw YUV frames;
    2. Edit each frame with a lot of computation, including flipping and cropping, preferably with my existing native C++;
    3. Encode the raw frames into another video.

    I am using API 22 for robustness, since "All video codecs support flexible YUV 4:2:0 buffers since LOLLIPOP_MR1." Unfortunately, when I call

    encoder.getInputImage(inputIndex);

    instead of

    encoder.getInputBuffer(inputIndex);

    I get null, similar to this post, which has no answer. Is this a common issue? Do I have other options to decode-edit-encode a video without either rendering to a Surface or using FFmpeg, which is a headache to build and debug?
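
    For what it's worth, getInputImage(int) can legitimately return null when the codec does not expose a CPU-accessible flexible Image for that buffer (or when the index was not dequeued first), so a fallback is worth having. Since the heavy editing already lives in native C++, one route that sidesteps the Image API entirely is the NDK's AMediaCodec ByteBuffer path (API 21+). Below is a minimal sketch of the decode half only, under those assumptions; edit_frame_native is a hypothetical placeholder for the existing C++ routines, and error handling is omitted:

    /* Minimal sketch of the NDK ByteBuffer path (API 21+): extract,
     * decode, hand each raw frame to native code. The encoder half is
     * omitted but mirrors the same dequeue/queue pattern. Assumes the
     * video is track 0. edit_frame_native() is a hypothetical
     * stand-in for existing C++ code. */
    #include <media/NdkMediaCodec.h>
    #include <media/NdkMediaExtractor.h>
    #include <media/NdkMediaFormat.h>
    #include <stdbool.h>

    void edit_frame_native(uint8_t *yuv, size_t size); /* hypothetical */

    void decode_and_edit(const char *path) {
        AMediaExtractor *ex = AMediaExtractor_new();
        AMediaExtractor_setDataSource(ex, path);
        AMediaExtractor_selectTrack(ex, 0);      /* assume video is track 0 */
        AMediaFormat *fmt = AMediaExtractor_getTrackFormat(ex, 0);

        const char *mime = NULL;
        AMediaFormat_getString(fmt, AMEDIAFORMAT_KEY_MIME, &mime);
        AMediaCodec *dec = AMediaCodec_createDecoderByType(mime);
        AMediaCodec_configure(dec, fmt, NULL /* no Surface */, NULL, 0);
        AMediaCodec_start(dec);

        bool in_done = false, out_done = false;
        while (!out_done) {
            if (!in_done) {                      /* feed compressed input */
                ssize_t i = AMediaCodec_dequeueInputBuffer(dec, 10000);
                if (i >= 0) {
                    size_t cap;
                    uint8_t *buf = AMediaCodec_getInputBuffer(dec, i, &cap);
                    ssize_t n = AMediaExtractor_readSampleData(ex, buf, cap);
                    if (n < 0) {
                        AMediaCodec_queueInputBuffer(dec, i, 0, 0, 0,
                            AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM);
                        in_done = true;
                    } else {
                        AMediaCodec_queueInputBuffer(dec, i, 0, n,
                            AMediaExtractor_getSampleTime(ex), 0);
                        AMediaExtractor_advance(ex);
                    }
                }
            }
            AMediaCodecBufferInfo info;          /* drain raw YUV output */
            ssize_t o = AMediaCodec_dequeueOutputBuffer(dec, &info, 10000);
            if (o >= 0) {
                size_t sz;
                uint8_t *yuv = AMediaCodec_getOutputBuffer(dec, o, &sz);
                edit_frame_native(yuv + info.offset, info.size);
                /* ...copy the edited frame into the encoder's input
                   buffer here, obtained via the same dequeue calls... */
                AMediaCodec_releaseOutputBuffer(dec, o, false);
                if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM)
                    out_done = true;
            }
        }
        AMediaCodec_stop(dec);
        AMediaCodec_delete(dec);
        AMediaFormat_delete(fmt);
        AMediaExtractor_delete(ex);
    }

    One caveat: the raw layout of the output buffer is whatever the codec reports in its output AMediaFormat (color format, stride), so the native code still has to handle that per device.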

  • Fetching movie frames via ffmpeg and feeding them to vlfeat's sift

    16 November 2012, by Karl

    I am going to develop a program that uses ffmpeg and vlfeat on a Linux server.

    My task is simple: get some frames from a movie file and feed those frames to vlfeat's sift.

    I am reading through some documents on using ffmpeg in C development, mainly here and here. As stated on the site, "There is not much "web based" official documentation for using these libraries," and some of the tutorials there might be a little outdated. I have also read that the API may differ from version to version. So I would like to ask the following:

    1. Is it safe to follow this tutorial for the current implementation?

      • If so, given the impression from above, what are some of the things that should be changed for the current implementation? (I currently have ffmpeg-git-c995644)
      • If not, what are the functions to acquire the frames of the movie file in any format?
    2. On the vlfeat side, if I feed it a movie's image frame from ffmpeg, what kind of conversion is required so that vlfeat's sift implementation can "digest" it?
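
    The decode API has shifted over the years (2012-era trees use avcodec_decode_video2, current ones use avcodec_send_packet/avcodec_receive_frame), but whichever call produces the decoded AVFrame, the vlfeat side stays the same: vl_sift consumes a single-channel image of vl_sift_pix (float) values, so the usual conversion is sws_scale to GRAY8 followed by widening to float. A minimal sketch under those assumptions, with error handling omitted and sift_on_frame being an illustrative name:

    /* Minimal sketch: AVFrame -> 8-bit gray -> float, then VLFeat SIFT.
     * Assumes "frm" was decoded by libavcodec; error handling omitted.
     * Link against libswscale, libavutil and VLFeat. */
    #include <libswscale/swscale.h>
    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>
    #include <vl/sift.h>
    #include <stdint.h>
    #include <stdlib.h>

    void sift_on_frame(const AVFrame *frm) {       /* illustrative name */
        int w = frm->width, h = frm->height;

        /* 1. Whatever the decoder produced -> 8-bit grayscale. */
        struct SwsContext *sws = sws_getContext(
            w, h, (enum AVPixelFormat)frm->format,
            w, h, AV_PIX_FMT_GRAY8, SWS_BILINEAR, NULL, NULL, NULL);
        uint8_t *gray = malloc((size_t)w * h);
        uint8_t *dst[4] = { gray, NULL, NULL, NULL };
        int dst_stride[4] = { w, 0, 0, 0 };
        sws_scale(sws, (const uint8_t * const *)frm->data,
                  frm->linesize, 0, h, dst, dst_stride);

        /* 2. Widen to vl_sift_pix (float), the type vl_sift consumes. */
        vl_sift_pix *fim = malloc(sizeof(vl_sift_pix) * w * h);
        for (int i = 0; i < w * h; i++)
            fim[i] = (vl_sift_pix)gray[i];

        /* 3. Standard octave-by-octave VLFeat SIFT loop. */
        VlSiftFilt *sift = vl_sift_new(w, h, -1, 3, 0);
        if (vl_sift_process_first_octave(sift, fim) != VL_ERR_EOF) {
            do {
                vl_sift_detect(sift);
                int nkeys = vl_sift_get_nkeypoints(sift);
                (void)nkeys; /* orientations/descriptors per keypoint here */
            } while (vl_sift_process_next_octave(sift) != VL_ERR_EOF);
        }

        vl_sift_delete(sift);
        sws_freeContext(sws);
        free(fim);
        free(gray);
    }

    Per keypoint you would then call vl_sift_calc_keypoint_orientations and vl_sift_calc_keypoint_descriptor, which is the step that actually "digests" the frame.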

  • Gstreamer video latency increases with decreased FPS

    19 November 2024, by Ri Di

    I am using an RPI 5 to stream the video:

    rpicam-vid -t 0 --camera 0 --nopreview --mode 2304:1296:10:P --codec yuv420 --width 640 --height 360 --framerate 10 --rotation 0 --autofocus-mode manual --inline --listen -o - | ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 640x360 -r 10 -i /dev/stdin -c:v libx264 -preset ultrafast -tune zerolatency -maxrate 300k -bufsize 50k -g 30000 -f mpegts tcp://192.168.0.147:1234

    View it with:

    gst-launch-1.0 -v tcpserversrc host=0.0.0.0 port=1234 ! queue ! tsdemux ! h264parse ! avdec_h264 ! videorate ! video/x-raw,framerate=10/1 ! videoconvert ! autovideosink sync=false

    The problem is that at 10 FPS I get around 2 s of latency, while 56 or 120 FPS results in below 300 ms latency.

    Is the problem on the sender or the reader side? Or both?

    I am not planning to use 10 FPS; it's only to demonstrate the problem. But I would like to get lower latency at 56 FPS, just like at 120 FPS (around 80-100 ms difference), or maybe even better, as latency seems to drop as FPS increases.

    Maybe there is some kind of buffering parameter which holds frames?
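
    One candidate worth checking on the reader side: in gst-plugins-bad, tsdemux has a latency property (700 ms by default) that it uses to smooth demuxing, and a fixed smoothing window is a plausible suspect when latency varies with frame rate. Below is a minimal C sketch of the receiving pipeline with that property reduced; whether your build exposes it should be verified with gst-inspect-1.0 tsdemux:

    /* Minimal sketch: the original receive pipeline rebuilt in C, with
     * tsdemux's smoothing latency lowered from its 700 ms default.
     * Assumes a tsdemux that exposes the "latency" property. Build:
     * gcc demo.c $(pkg-config --cflags --libs gstreamer-1.0) */
    #include <gst/gst.h>

    int main(int argc, char *argv[]) {
        gst_init(&argc, &argv);

        GError *err = NULL;
        GstElement *pipeline = gst_parse_launch(
            "tcpserversrc host=0.0.0.0 port=1234 ! queue ! "
            "tsdemux latency=100 ! h264parse ! avdec_h264 ! "
            "videoconvert ! autovideosink sync=false", &err);
        if (pipeline == NULL) {
            g_printerr("Failed to build pipeline: %s\n", err->message);
            g_error_free(err);
            return 1;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        /* Block until error or end of stream. */
        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered(bus,
            GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
        if (msg != NULL)
            gst_message_unref(msg);
        gst_object_unref(bus);

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }

    If this helps, the equivalent change in the original gst-launch-1.0 command is just tsdemux latency=100 in the pipeline string.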

    (Of course, when testing at higher FPS I change both framerate values in the sender command and the one in the reader command. The camera is the official RPI v3.)

    I'd also like to mention that the same thing happens with ffplay:

    ffplay -probesize 3000 -i tcp://0.0.0.0:1234/?listen