
Other articles (60)

  • No talk of markets, clouds, etc.

    10 April 2011

    The vocabulary used on this site tries to avoid any reference to the fashions flourishing on web 2.0
    and in the businesses that live off it.
    You are therefore invited to banish the use of terms such as "Brand", "Cloud", "Market", etc.
    Our motivation is above all to create a simple tool, accessible to everyone, that encourages the sharing
    of creations on the Internet and lets authors keep optimal autonomy.
    No "Gold or Premium contract" is therefore planned, no (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (9166)

  • h264 lossless coding

    19 July 2022, by cloudraven

    Is it possible to do completely lossless encoding in h264? By lossless, I mean that if I feed it a series of frames, encode them, and then extract all the frames from the encoded video, I will get exactly the same frames as in the input, pixel by pixel, frame by frame. Is that actually possible? Take this example:

    I generate a bunch of frames, then I encode the image sequence to an uncompressed AVI (with something like VirtualDub). I then apply lossless h264 to it (the help files claim that setting --qp 0 gives lossless compression, but I am not sure whether that means there is no loss at any point of the process, or just that the quantization step is lossless). I can then extract the frames from the resulting h264 video with something like mplayer.

    I tried Handbrake first, but it turns out it doesn't support lossless encoding. I tried x264, but it crashes. That may be because my source AVI file is in the RGB colorspace instead of YV12. I don't know how to feed a series of YV12 bitmaps to x264, or in what format, so I cannot even try.
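
    For concreteness, a sketch of one possible invocation, assuming an FFmpeg build with libx264 and a hypothetical frame_%04d.png input sequence; the libx264rgb encoder is used so that an RGB source is not pushed through the lossy RGB-to-YUV conversion:

    $ # -qp 0 requests lossless quantization (use -c:v libx264 for YV12/YUV sources)
    $ ffmpeg -i frame_%04d.png -c:v libx264rgb -qp 0 lossless.mkv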

    



    In summary, what I want to know is whether there is a way to go from:

    Series of lossless bitmaps (in any colorspace) -> some transformation -> h264 encode -> h264 decode -> some transformation -> the original series of lossless bitmaps

    Is there a way to achieve this?

    EDIT: There is a VERY valid point about lossless H264 not making much sense. I am well aware that there is no way I could tell, with just my eyes, the difference between an uncompressed clip and one compressed at a high rate in H264, but I don't think that makes it useless. For example, it may be useful for storing video for editing without taking huge amounts of space, without losing quality, and without spending too much encoding time every time the file is saved.

    UPDATE 2: Now x264 doesn't crash. I can use as sources either Avisynth or lossless YV12 Lagarith (to avoid the colorspace compression warning). However, even with --qp 0 and an RGB or YV12 source I still get some differences, minimal but present. This is troubling, because all the information I have found on lossless predictive coding (--qp 0) claims that the whole encoding should be lossless, but I am unable to verify this.
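
    One way to verify the round trip, sketched with the hypothetical file names above, is to compare per-frame checksums using FFmpeg's framemd5 muxer; forcing a common pixel format keeps the comparison fair, and identical files would mean bit-exact frames:

    $ ffmpeg -i frame_%04d.png -pix_fmt rgb24 -f framemd5 source.md5
    $ ffmpeg -i lossless.mkv -pix_fmt rgb24 -f framemd5 roundtrip.md5
    $ diff source.md5 roundtrip.md5
    $ # for a YV12 pipeline, compare with -pix_fmt yuv420p instead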

    


  • How to work with data received from streaming services in my Java application?

    24 November 2020, by gabriel garcia

    I'm currently trying to develop a "streaming client" as a way to organize multiple stream services (twitch, yt, mitele...) in a single desktop application written in Java.

    It basically relies on streamlink (which in turn relies on ffmpeg), thanks to all its features, so my project could be described as a frontend for streamlink.

    Straight to the point: one of the features I'd like to add is the option to programmatically record streams in the background and show the video stream to the user when requested. Since there's also the possibility that the user wants to watch the stream without recording it, I'm forced to work with all the byte-like data sent from those streaming sources.

    So the problem is basically that I do not know much about video coding/decoding/muxing/demuxing, nor about video theory such as container structure and video formats.

    But the idea is to work with all the data sent from the stream source (let's say Twitch, for example), read those bytes (I'm not sure what kind of information is sent to the client, nor in what format) from the java.lang.Process's stdout, and then present it to the client.

    Here's another problem: I don't know how to play video streams in JavaFX, and I don't think it's even supported right now. So I would have to extract each frame and its associated sound from stdout and show them to the user each time a new frame is received (oops, another problem, since I don't know where each frame starts or ends when I'm reading stdout line by line).

    As a summary:

    • What kind of data am I receiving from the streaming source? (see the sketch after this list)
    • How can I know where each frame starts and stops?
    • How can I extract the image and sound from each frame?

    I hope I'm not asking too much and that you can shed some light upon my darkness.
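
    For the first question, a sketch (the URL is illustrative): streamlink's -O/--stdout flag writes the raw stream to stdout, and what arrives is not text but a muxed binary container (typically MPEG-TS for Twitch), so it has to be read as bytes and handed to a demuxer rather than parsed line by line. Piping it into ffprobe or ffplay shows what the bytes actually are:

    $ # identify the container and its audio/video streams
    $ streamlink -O https://www.twitch.tv/somechannel best | ffprobe -

    $ # or play it directly; no line-based parsing is involved anywhere
    $ streamlink -O https://www.twitch.tv/somechannel best | ffplay -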

    


  • Unable to link against FFmpeg libraries

    28 October 2015, by Cody

    I tried to build the following, but I always get a link-time error.

    #include <libavutil/log.h>
    int main(int argc, char *argv[])
    {
       ::av_log_set_flags(AV_LOG_SKIP_REPEATED);
       return 0;
    }

    My distro is Debian GNU/Linux 8 (jessie). I built FFmpeg myself, and the configure command was...

    $ ./configure --prefix=/usr/local --disable-static --enable-shared \
    > --extra-ldflags='-Wl,-rpath=/usr/local/lib'

    The link error is as follows.

    $ g++ foo.cpp -D__STDC_CONSTANT_MACROS -Wall \
    > -Wl,-rpath=/usr/local/lib \
    > $(pkg-config --cflags --libs libavutil)
    /tmp/ccKzgEFb.o: In function `main':
    foo.cpp:(.text+0x17): undefined reference to `av_log_set_flags(int)'
    collect2: error: ld returned 1 exit status

    where the output of pkg-config is...

    $ pkg-config --cflags --libs libavutil
    -I/usr/local/include -L/usr/local/lib -lavutil

    The objdump output shows that the shared object libavutil.so does have av_log_set_flags inside.

    $ objdump --dynamic-syms /usr/local/lib/libavutil.so | grep 'av_log_set_flags'
    000260f0 g    DF .text  0000000a  LIBAVUTIL_54 av_log_set_flags

    Please note that the g++ command used to build the application above had the linker option -Wl,-rpath=/usr/local/lib, yet it still doesn't work. Also, I've used inotifywait to monitor whether one of the other versions provided by the distro was being picked up. They were not; the one being opened during execution of g++ was /usr/local/lib/libavutil.so.

    Summary:

    1. /usr/local/lib/libavutil.so does have the symbol.

    2. -rpath was used to force linking against the shared library.

    3. So why the link-time error? T_T

    Any suggestion or information would be highly appreciated! Thanks!

    REEDIT: ffplay works fine, and ldd shows it uses /usr/local/lib/libavutil.so. So the libraries do not seem to be broken, and the problem becomes how to build my own code against them.
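
    One diagnostic sketch for this: the undefined reference is printed with a C++ parameter list, av_log_set_flags(int), which is how an unresolved C++-mangled symbol is reported; a plain C symbol would appear without the parentheses. FFmpeg's public headers are plain C and, per FFmpeg's own FAQ, must be wrapped in extern "C" { ... } when included from C++; otherwise the declaration gets C++ linkage and the linker looks for a mangled name that libavutil.so does not export. The mismatch can be made visible on the object file itself:

    $ # compile only, then list the undefined symbols the object asks for
    $ g++ -c foo.cpp -D__STDC_CONSTANT_MACROS $(pkg-config --cflags libavutil)
    $ nm foo.o | grep av_log_set_flags | c++filt
    $ # a mangled name such as _Z16av_log_set_flagsi here would confirm C++ linkage

    Wrapping the include as extern "C" { #include <libavutil/log.h> } and rebuilding should make the undefined reference go away.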