Advanced search

Media (1)

Keyword: - Tags - / lev manovitch

Other articles (112)

  • Customize by adding your logo, banner, or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MédiaSPIP, or news from your projects, on your MédiaSPIP via the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news-item creation form.
    News-item creation form: for a document of the news type, the default fields are: Publication date (customize the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact your MédiaSpip administrator to find out.

On other sites (5272)

  • FFmpeg unable to initialize swsContext for H264 frame data received through RTP payload

    26 March 2015, by Praveen

    I am trying to decode H264 video frames received through RTSP streaming.

    I followed this post: How to process raw UDP packets so that they can be decoded by a decoder filter in a directshow source filter

    I was able to identify the start and the end of each frame in the RTP packets and reconstructed my video frame.

    But I didn't receive any SPS/PPS data in my RTSP session. I looked for the string "sprop-parameter-sets" in my SDP (Session Description Protocol), and there was none.

    Reconstructing the video frame from RTP packets:

    The payload of the first RTP packet goes like this: "1c 80 00 00 01 61 9a 03 03 6a 59 ff 97 e0 a9 f6"

    This says that it is fragmented data ("1c") and the start of the frame ("80"). I copied the rest of the payload data (everything except the first two bytes, "1c 80").

    The following RTP packets have payloads starting with "1c 00", which marks a continuation of the frame data. For each of them I kept appending the payload (except the first two bytes, "1c 00") to a byte buffer.

    When I get the RTP packet whose payload starts with "1c 40", which marks the end of the frame, I copied the rest of its payload (except the first two bytes, "1c 40") into the byte buffer.

    Thus I reconstructed the video frame in the byte buffer.

    Then I prepended four bytes [0x00, 0x00, 0x00, 0x01] to the byte buffer before sending it to the decoder, because I didn't receive any SPS or PPS NAL units.

    When I send this byte buffer to the decoder, the decoder fails when it tries to initialize the swsContext.

    Am I sending the NAL bytes and the video frame data correctly?
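The procedure described above is FU-A (fragmentation unit, NAL type 28) depacketization from RFC 6184, with one commonly missed step: after stripping the two FU bytes from the first fragment, the one-byte NAL unit header has to be rebuilt from the FU indicator (F and NRI bits) and the FU header (NAL type bits) before the Annex B start code is prepended. A minimal sketch, using synthetic fragment bytes rather than the ones from the capture:

```python
# FU-A reassembly per RFC 6184: `payloads` holds the RTP payloads of one
# fragmented NAL unit, in order (start fragment, middles, end fragment).
def reassemble_fua(payloads):
    buf = bytearray(b"\x00\x00\x00\x01")        # Annex B start code
    for p in payloads:
        fu_indicator, fu_header = p[0], p[1]
        if fu_indicator & 0x1F != 28:           # NAL type 28 = FU-A
            raise ValueError("not an FU-A payload")
        if fu_header & 0x80:                    # S bit set: first fragment
            # Rebuild the NAL unit header: F/NRI bits from the FU indicator,
            # the real NAL type from the FU header.
            buf.append((fu_indicator & 0xE0) | (fu_header & 0x1F))
        buf.extend(p[2:])                       # drop FU indicator + FU header
    return bytes(buf)

# Synthetic example: NRI = 3, NAL type 5 (IDR slice) split over three packets.
frags = [bytes([0x7C, 0x85, 0xAA, 0xBB]),       # S bit set
         bytes([0x7C, 0x05, 0xCC]),             # middle fragment
         bytes([0x7C, 0x45, 0xDD])]             # E bit set
print(reassemble_fua(frags).hex())              # 0000000165aabbccdd
```

Even with correct reassembly, the decoder still needs SPS and PPS NAL units before it can determine the picture dimensions; when sprop-parameter-sets is absent from the SDP they usually arrive in-band in the stream itself, and without them the decoder reports a width and height of 0, which is a typical reason a swsContext fails to initialize.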

  • When an NVIDIA GPU is used for H265 video decoding acceleration, the video may skip frames, while an AMD GPU does not [closed]

    16 August 2023, by uproar

    We are using NVIDIA GPUs (model: GTX1050TI) to accelerate H.265 video decoding and rendering, with DXVA2 hardware acceleration. The FFmpeg version is 4.2.2.
    After the video has played for a long time, frames start jumping (frames from a few seconds earlier appear during otherwise normal playback); once the problem appears it persists and never recovers on its own.
    Has anyone been in the same situation, or can anyone help?

    We checked the rendering and playback timing of each frame and found no forward jump at that level, so the data decoded by the GPU may itself be incorrect.
    We also ran comparison tests with AMD GPUs, which do not show the problem.
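The timing check described above comes down to scanning the decoder's output for non-monotonic presentation timestamps. A sketch of that scan (the pts list is whatever the player logs per displayed frame, and the function name is ours, not FFmpeg's):

```python
# Scan a list of per-frame presentation timestamps (display order) and
# return the indices where playback would jump backward, i.e. a frame
# whose pts is smaller than that of the frame shown just before it.
def find_backward_jumps(pts_list):
    return [i for i in range(1, len(pts_list))
            if pts_list[i] < pts_list[i - 1]]

print(find_backward_jumps([0, 40, 80, 40, 120]))  # [3]
```

If the timestamps are monotonic but the pictures still jump, the wrong image data is being produced or presented for a correctly timed frame, which points at the DXVA2 decode path (e.g. surface reuse) rather than the presentation clock.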

  • Convert Audio to Video (with Poster) and Convert Audio for HLS (Laravel-FFMpeg)

    4 October 2020, by m_zanjani

    I built a site with Laravel and used Laravel-FFMpeg for media processing.
    I could not find answers to my questions, so I am asking them here in the hope that someone can help.

    • How can I convert an audio file (wav, mp3, etc.) to a video file (mp4) using Laravel-FFmpeg or PHP-FFMpeg, with an image as a poster?

    // something like this command
    ffmpeg -loop 1 -i image.jpg -i audio.wav -c:v libx264 -tune stillimage -c:a aac -b:a 192k -pix_fmt yuv420p -shortest out.mp4

    • How can I convert an audio file (wav, mp3, etc.) to HLS using Laravel-FFmpeg or PHP-FFMpeg?
      • I did this for a video file with no problem, but I have not seen a sample for an audio file.
      • Is it possible to stream an audio file by converting it to HLS (producing .ts and .m3u8 files)? That is, do I need to create HLS files to stream audio in players like Video.js? Does streaming an audio file even make sense?
      • Do we need to convert the audio file to a video file first and then build the HLS, or can we build the HLS directly?

    • With this package, can a watermark be placed on a photo, or text written on it (the drawtext command)?

    • How can I add album art (jpg) to an audio file (mp3) using this package?

    // something like this command
    ffmpeg -i in.mp3 -i test.png -map 0:0 -map 1:0 -c copy -id3v2_version 3 -metadata:s:v title="Album cover" -metadata:s:v comment="Cover (front)" out.mp3

    If you could provide sample code for this package or PHP-FFMpeg for the questions above, I would be grateful.
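On the audio-to-HLS question: ffmpeg's hls muxer can segment audio directly, with no intermediate video step. A minimal sketch that just assembles the command line (file names are placeholders, the flags are standard ffmpeg hls muxer options, and actually running the command requires ffmpeg to be installed):

```python
# Build an ffmpeg argument list that converts an audio file straight to HLS.
# No video stream is needed: -vn drops video, and the hls muxer writes the
# .m3u8 playlist plus numbered .ts segments next to it.
def audio_to_hls_args(src, playlist, segment_seconds=10):
    return [
        "ffmpeg", "-i", src,
        "-vn",                              # audio only, no video stream
        "-c:a", "aac", "-b:a", "128k",      # re-encode to AAC for HLS
        "-f", "hls",
        "-hls_time", str(segment_seconds),  # target segment duration
        "-hls_playlist_type", "vod",        # complete, seekable playlist
        playlist,
    ]

print(" ".join(audio_to_hls_args("audio.mp3", "out.m3u8")))
```

Audio-only variant streams are valid HLS, so converting the audio to a video file first is not required just to segment it.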

    Thank You