Media (91)

Other articles (49)

  • Taking part in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    This is done through the SPIP translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    At the moment MediaSPIP is only available in French and (...)

  • Use it, talk about it, criticize it

    10 April 2011

    The first thing you can do is talk about it, either directly with the people involved in its development or with those around you, to convince new people to use it.
    The larger the community, the faster the software will evolve...
    A mailing list is available for any discussion between users.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (10215)

  • AV Foundation or FFmpeg tutorials [closed]

    30 November 2012, by time

    I want to edit video on the iPhone, so I searched and found that there are two ways: AV Foundation and FFmpeg. The problem is that I am unable to find any tutorial to start working with them. Could someone please provide me with some links?

  • -12909 error decoding h264 stream with intra-refresh

    2 July 2024, by ciclopez

    I'm making an iOS app that decodes an H.264 stream using VideoToolbox. I create the stream with ffmpeg on a PC and send it to an iPhone over RTP. It works nicely when I use this command to create it:

    ffmpeg -y -f:v rawvideo -c:v rawvideo -s 1280x720 -pix_fmt bgra -r 30 -an -i - -pix_fmt yuv420p -c:v libx264 -tune zerolatency -preset fast -b:v 5M -refs 1 -g 30 -profile:v high -level 4.1 -f rtp rtp://192.168.1.100:5678

    The iPhone receives and displays all the frames. However, when I enable intra-refresh

    -intra-refresh 1

    decoding fails with error code -12909 (-8969 on simulator) when VTDecompressionSessionDecodeFrame() is called.

    I take care of processing the UDP packets to extract the NAL units myself, so I have triple-checked this part of the code and ruled it out as the source of the problem.

    I didn't find any information saying that VideoToolbox does not support intra-refresh, so the question is: does VideoToolbox support intra-refresh? And if it does, am I missing something on the ffmpeg side that makes the stream unsupported by VideoToolbox?
    Do I have to add anything to the CMVideoFormatDescriptionRef apart from creating it with the SPS and PPS data using CMVideoFormatDescriptionCreateFromH264ParameterSets()?
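
    For reference, this is roughly the decoder setup the question describes: a CMVideoFormatDescriptionRef built from the SPS/PPS, and a VTDecompressionSession whose output callback receives each decoded frame. A minimal C sketch only; the sps/pps buffers and the function names are placeholders for whatever the app's RTP depacketizer actually produces:

    #include <stdio.h>
    #include <VideoToolbox/VideoToolbox.h>

    // Output callback: VideoToolbox calls this for every frame passed to
    // VTDecompressionSessionDecodeFrame(), either with a decoded image
    // buffer or with an error status such as -12909 (kVTVideoDecoderBadDataErr).
    static void didDecompress(void *refCon, void *frameRefCon, OSStatus status,
                              VTDecodeInfoFlags flags, CVImageBufferRef imageBuffer,
                              CMTime pts, CMTime duration)
    {
        if (status != noErr) {
            fprintf(stderr, "decode failed: %d\n", (int)status);
            return;
        }
        // imageBuffer holds the decoded frame; hand it to the renderer here.
    }

    // sps/pps are assumed (placeholder names) to be the raw parameter set
    // payloads, without start codes, already extracted from the RTP stream.
    static OSStatus createDecoder(const uint8_t *sps, size_t spsSize,
                                  const uint8_t *pps, size_t ppsSize,
                                  CMVideoFormatDescriptionRef *fmtOut,
                                  VTDecompressionSessionRef *sessionOut)
    {
        const uint8_t *paramSets[2]  = { sps, pps };
        const size_t   paramSizes[2] = { spsSize, ppsSize };

        // 4 = length of the AVCC size prefix expected on every NAL unit that
        // is later wrapped in a CMSampleBuffer and fed to the decoder.
        OSStatus err = CMVideoFormatDescriptionCreateFromH264ParameterSets(
            kCFAllocatorDefault, 2, paramSets, paramSizes, 4, fmtOut);
        if (err != noErr)
            return err;

        VTDecompressionOutputCallbackRecord callback = { didDecompress, NULL };
        return VTDecompressionSessionCreate(kCFAllocatorDefault, *fmtOut,
                                            NULL,  // decoder specification
                                            NULL,  // destination buffer attributes
                                            &callback, sessionOut);
    }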

  • FFmpeg decoding H264

    25 September 2011, by Steve McFarlin

    I am decoding an H.264 stream using FFmpeg on the iPhone. I know the H.264 stream is valid and the SPS/PPS are correct, since VLC, QuickTime, and Flash all decode the stream properly. The issue I am having on the iPhone is best shown by this picture.

    (screenshot of the decoded frame showing the artifacts described below)

    It is as if the motion vectors are being drawn. This picture was snapped while there was a lot of motion in the image. If the scene is static then there are dots in the corners. This always occurs with predictive frames. The blocky colors are also an issue.

    I have tried various build settings for FFmpeg, such as turning off optimizations, asm, neon, and many other combinations. Nothing seems to alter the behavior of the decoder. I have also tried the Works with HTML, Love and Peace releases, as well as the latest Git sources. Is there a setting I am missing, or could I have inadvertently enabled some debug setting in the decoder?

    Edit

    I am using sws_scale to convert the image to RGBA. I have tried various pixel formats, with the same results.

    sws_scale(convertCtx, (const uint8_t**)srcFrame->data, srcFrame->linesize, 0, codecCtx->height, dstFrame->data, dstFrame->linesize);

    I am using PIX_FMT_YUV420P as the source format when setting up my codec context.
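
    For comparison, here is a self-contained sketch of the conversion step described above, written against current FFmpeg API names (AV_PIX_FMT_* and av_frame_get_buffer; the 2011-era code in the question uses the older PIX_FMT_* constants). It assumes srcFrame is a complete decoded frame; the context and destination frame are created per call only to keep the example short, and a real decode loop would reuse both:

    #include <libavutil/frame.h>
    #include <libswscale/swscale.h>

    // Convert one fully decoded YUV420P frame to RGBA.
    static AVFrame *yuv420p_to_rgba(const AVFrame *srcFrame, int width, int height)
    {
        struct SwsContext *convertCtx = sws_getContext(
            width, height, AV_PIX_FMT_YUV420P,   // source geometry and format
            width, height, AV_PIX_FMT_RGBA,      // destination geometry and format
            SWS_BILINEAR, NULL, NULL, NULL);
        if (!convertCtx)
            return NULL;

        AVFrame *dstFrame = av_frame_alloc();
        if (!dstFrame) {
            sws_freeContext(convertCtx);
            return NULL;
        }
        dstFrame->format = AV_PIX_FMT_RGBA;
        dstFrame->width  = width;
        dstFrame->height = height;
        if (av_frame_get_buffer(dstFrame, 0) < 0) {  // allocates data/linesize
            av_frame_free(&dstFrame);
            sws_freeContext(convertCtx);
            return NULL;
        }

        sws_scale(convertCtx,
                  (const uint8_t *const *)srcFrame->data, srcFrame->linesize,
                  0, height,
                  dstFrame->data, dstFrame->linesize);

        sws_freeContext(convertCtx);
        return dstFrame;
    }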