
Other articles (45)

  • Media quality after processing

    21 June 2013, by

    Correctly configuring the software that processes media is important for striking a balance between the parties involved (the host's bandwidth, the media quality for the editor and the visitor, and accessibility for the visitor). How should you set the quality of your media?
    The higher the media quality, the more bandwidth is used, and visitors on a low-bandwidth connection will have to wait longer. Conversely, the lower the media quality, the more degraded the media becomes, or even (...)
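
    To make that trade-off concrete, here is a minimal sketch of re-encoding a file at a chosen quality level with FFmpeg; the CRF value and file names are illustrative assumptions, not MediaSPIP's actual settings.

    # Hypothetical example: a higher CRF (e.g. 28) means lower quality and
    # less bandwidth; a lower CRF (e.g. 18) means the opposite.
    ffmpeg -i source.mov \
    -c:v libx264 -crf 23 -preset medium \
    -c:a aac -b:a 128k \
    output.mp4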

  • Updating from version 0.1 to 0.2

    24 June 2013, by

    An explanation of the notable changes when upgrading MediaSPIP from version 0.1 to version 0.2. What's new?
    Regarding software dependencies: the latest versions of FFmpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favor of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
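
    The metadata-retrieval step above can be sketched with the two tools the update installs; a minimal example, with sample.webm as a placeholder file name:

    # Hypothetical example: inspect container and stream metadata with
    # the tools the 0.2 update relies on.
    ffprobe -v quiet -print_format json -show_format -show_streams sample.webm
    mediainfo sample.webm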

  • Customizing by adding your own logo, banner, or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

On other sites (7628)

  • Add a side data type for audio service type.

    14 November 2014, by Anton Khirnov

    Currently, audio service type is a field in AVCodecContext. However,
    side data is more appropriate for this kind of information.

    • doc/APIchanges
    • libavcodec/avcodec.h
    • libavcodec/utils.c
    • libavcodec/version.h
    • libavfilter/af_ashowinfo.c
    • libavutil/frame.h
    • libavutil/version.h
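
    For context, the audio service type being moved into side data here is the same value that can be set from the command line as a codec option when encoding, e.g. for AC-3. A hedged sketch, with placeholder file names:

    # Hypothetical example: tag an AC-3 stream as a visually-impaired
    # audio service ("vi"); other values include ma, hi, di, co.
    ffmpeg -i input.wav -c:a ac3 -audio_service_type vi output.ac3
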
  • What Google Cloud service can be used to process files stored in Firebase Cloud Storage with FFmpeg? [closed]

    1 May 2021, by uponly

    I am building a ReactJS app and am trying to figure out a way to process files (images, videos, and audio of any type) stored in my Firebase Storage bucket using FFmpeg. Currently, I have set up the functionality that allows the user to upload files to my storage bucket, and a corresponding URL is stored in a document in Firestore.

    Ideally, I'd love to do this with Cloud Functions HTTP triggers because I have all of that set up already. It would be nice to just call an HTTP trigger to process the file after it has been uploaded. However, after a bit of research, my current understanding is that I would have to deploy my app to the App Engine flexible environment, because apparently that is the only way to set a longer timeout in case I have to process a long, high-quality video, for example. That would rule out Cloud Functions, whose short timeout could leave files only partially processed.

    Here is the user flow I am trying to achieve, which may help make things clearer:

    1. [Done] The user uploads a file (image, audio, or video) to Firebase Cloud Storage. A URL is also stored in their corresponding user document in Firestore.
    2. [Here and the steps onward are what I am trying to achieve] After the file has been stored, I'd like to kick off some sort of function that grabs the newly stored file and begins processing it in the cloud.
    3. Store the newly processed file back into the Cloud Storage bucket.
    4. Allow the user to preview the processed file (ideally by streaming it, if possible).

    For step 2 and onward, I am just generally confused about which Google service I should use to process my file in the cloud with FFmpeg, and about how to connect it to my React app on the client side. If I have to go the Google App Engine route, how do I connect App Engine to my React app without having to build and deploy the app, since it is still in development?

    This is not strictly a coding question, so I apologize if this is the wrong place to post. I am new to all this; any and all help is greatly appreciated. Thank you.
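
    Whatever service ends up running the work (App Engine flexible, Cloud Run, or a VM), the processing step itself reduces to "download, transcode, re-upload". A minimal shell sketch of that step, assuming gsutil is available on the worker; the bucket paths, file names, and encoding settings are placeholders:

    # Hypothetical worker step: fetch the uploaded file, process it
    # with FFmpeg, and upload the result back to the bucket.
    gsutil cp gs://my-app.appspot.com/uploads/input.mp4 /tmp/input.mp4
    ffmpeg -i /tmp/input.mp4 -c:v libx264 -crf 23 -c:a aac /tmp/output.mp4
    gsutil cp /tmp/output.mp4 gs://my-app.appspot.com/processed/output.mp4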

  • FFmpeg h264_v4l2m2m encoder changing aspect ratio from 16:9 to 1:1 with black bars

    8 January, by LycoReco2007

    When switching from libx264 to h264_v4l2m2m encoder in FFmpeg for YouTube streaming, the output video's aspect ratio changes from 16:9 to 1:1 with black bars on the sides, despite keeping the same resolution settings.

    Original working command (with libx264):

    ffmpeg -f v4l2 \
    -input_format yuyv422 \
    -video_size 1280x720 \
    -framerate 30 \
    -i /dev/video0 \
    -f lavfi \
    -i anullsrc=r=44100:cl=stereo \
    -c:v libx264 \
    -preset ultrafast \
    -tune zerolatency \
    -b:v 2500k \
    -c:a aac \
    -b:a 128k \
    -ar 44100 \
    -f flv rtmp://a.rtmp.youtube.com/live2/[STREAM-KEY]

    When I replace libx264 with h264_v4l2m2m, the output is always a square picture, with black bars automatically added on the sides of the camera image. I am currently using a Raspberry Pi 4 Model B with a webcam that I believe supports the 16:9 ratio (I've verified this using the v4l2-ctl --list-formats-ext -d /dev/video0 command).

    I've tried the following:

    • Adding the -aspect 16:9 parameter to the ffmpeg command
    • Adding video filters such as -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1"

    None of these gives me the correct aspect ratio.

    How can I make the h264_v4l2m2m encoder maintain the original 16:9 aspect ratio without adding black bars? Is this a known limitation of the encoder, or am I missing some required parameters?
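
    One variant worth trying, on the assumption that the encoder is mishandling the yuyv422 input rather than the declared resolution: convert explicitly to yuv420p and pin the sample aspect ratio before the frames reach the encoder. This is a sketch, not a confirmed fix; behaviour depends on the Pi's V4L2 driver and FFmpeg build.

    # Hypothetical diagnostic: force yuv420p 1280x720 square-pixel frames
    # before h264_v4l2m2m sees them.
    ffmpeg -f v4l2 \
    -input_format yuyv422 \
    -video_size 1280x720 \
    -framerate 30 \
    -i /dev/video0 \
    -f lavfi \
    -i anullsrc=r=44100:cl=stereo \
    -vf "format=yuv420p,setsar=1" \
    -c:v h264_v4l2m2m \
    -b:v 2500k \
    -c:a aac \
    -b:a 128k \
    -ar 44100 \
    -f flv rtmp://a.rtmp.youtube.com/live2/[STREAM-KEY]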