Advanced search

Media (1)

Keyword: - Tags -/publishing

Other articles (75)

  • Enabling visitor registration

    12 April 2011, by

    It is also possible to enable visitor registration, which lets anyone open an account on the channel in question by themselves, for example in the context of open projects.
    To do so, go to the site's configuration area and choose the "User management" submenu. The first form displayed corresponds to this feature.
    By default, MediaSPIP created, during its initialization, an item in the top menu of the page leading (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page allowing them to edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
    Users can edit their profile from their author page; a link in the navigation, "Edit your profile", is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administer" section of the site.
    From there, in the navigation menu, you can reach a "Language management" section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language; once one has, it becomes grayed out in the configuration and (...)

On other sites (9350)

  • FFMPEG API — How much do stream parameters change frame-to-frame?

    8 September 2015, by kacey

I'm trying to extract raw streams from devices and files using ffmpeg. I notice the crucial frame information (Video: width, height, pixel format, color space; Audio: sample format) is stored both in the AVCodecContext and in the AVFrame. This means I can access it prior to the stream playing and I can access it for every frame.

How much do I need to account for these values changing frame-to-frame? I found https://ffmpeg.org/doxygen/trunk/demuxing__decoding_8c_source.html#l00081 which indicates that at least width, height, and pixel format may change frame to frame.

    • Will the color space and sample format also change frame to frame?
    • Will these changes be temporary (a single frame) or lasting (a significant block of frames), and is there any way to predict, for a given stream, which behavior will occur?
    • Is there a way to find the most descriptive attributes that this stream is capable of producing, so that I can scale all the lower-quality frames up, but not offer a result that is needlessly higher-quality than the source, even if this is a device or a network stream where I cannot play all the frames in advance?

The fundamental question is: how do I reconcile the flexibility of this API with the restriction that raw streams (my output) have no way of specifying a change of stream attributes mid-stream? I imagine I will need to either predict the most descriptive attributes to give the stream, or start a new stream when the attributes change. Which choice to make depends on whether these values change rapidly or stay relatively stable.
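    One way to approach the detection side, as a minimal sketch assuming the send/receive decoding API (FFmpeg 3.x and later): remember the last seen parameters, compare each decoded AVFrame against them, and rebuild downstream converters (or start a new raw output stream) only when something actually changed. The helper names here are illustrative, not part of the FFmpeg API.

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>
    #include <libavutil/pixdesc.h>

    /* Last seen video parameters. */
    typedef struct {
        int width, height;
        enum AVPixelFormat pix_fmt;
        enum AVColorSpace colorspace;
        int initialized;
    } SeenParams;

    /* Returns 1 if the frame differs from the previous one (or is the first). */
    static int frame_params_changed(SeenParams *s, const AVFrame *f)
    {
        if (s->initialized &&
            s->width == f->width && s->height == f->height &&
            s->pix_fmt == f->format && s->colorspace == f->colorspace)
            return 0;
        s->width       = f->width;
        s->height      = f->height;
        s->pix_fmt     = f->format;
        s->colorspace  = f->colorspace;
        s->initialized = 1;
        return 1;
    }

    static void decode_and_watch(AVCodecContext *dec, const AVPacket *pkt,
                                 AVFrame *frame, SeenParams *seen)
    {
        if (avcodec_send_packet(dec, pkt) < 0)
            return;
        while (avcodec_receive_frame(dec, frame) >= 0) {
            if (frame_params_changed(seen, frame)) {
                /* Rebuild sws/swr contexts here, or begin a new raw
                 * output stream with the new attributes. */
                fprintf(stderr, "parameters changed: %dx%d %s\n",
                        frame->width, frame->height,
                        av_get_pix_fmt_name(frame->format));
            }
            av_frame_unref(frame);
        }
    }

    In practice such changes tend to persist for a block of frames (e.g. an adaptive network stream switching resolution), so reinitializing converters on change is usually cheaper than trying to predict the "best" attributes up front.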

  • embed video stream with custom metadata

    15 May 2022, by Sergey Kolesnik

    I have an optical system that provides a UDP video stream.

From the device specification FAQ:

Both a single metadata (KLV) stream and compressed video (H.264) with metadata (KLV) are available on the Ethernet link. Compressed video and metadata are coupled in the same stream, compliant with the STANAG 4609 standard. Each encoded video stream is encapsulated with the associated metadata within an MPEG-TS single program stream over Ethernet UDP/IP. The video and metadata are synchronized through the use of timestamps.

There are also other devices that provide data about the state of an aircraft (velocity, coordinates, etc.). This data should be displayed in a client GUI alongside the video, and of course it has to be synchronized with the current video frame.

One of the approaches I thought of is to embed this data into the video stream, but I am not sure whether that is possible, or whether I should use a protocol other than UDP for this purpose.

    Is it possible/reasonable to use such an approach? Is the ffmpeg library suitable in this case?
    If not, what are the other ways to synchronize data with a video frame?
    Latency is crucial, although bandwidth is limited to 2-5 Mbps.

    It seems to be possible using ffmpeg: an AVPacket can be given additional data using the function av_packet_add_side_data, which takes a preallocated buffer, its size, and an AVPacketSideDataType.
    However, I am not sure for now which enum value of AVPacketSideDataType can be used for custom user-provided binary data.

Something similar that might be used for my needs:

    How do I encode KLV packets to an H.264 video using libav*
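    Following that approach, a minimal sketch of muxing timestamped KLV metadata as its own data stream within an MPEG-TS output (the layout STANAG 4609 describes), assuming an AVFormatContext already set up with the H.264 video stream; the helper names and payload handling are illustrative:

    #include <string.h>
    #include <libavformat/avformat.h>

    /* Add a KLV data stream next to the existing video stream. */
    static AVStream *add_klv_stream(AVFormatContext *oc)
    {
        AVStream *st = avformat_new_stream(oc, NULL);
        if (!st)
            return NULL;
        st->codecpar->codec_type = AVMEDIA_TYPE_DATA;
        st->codecpar->codec_id   = AV_CODEC_ID_SMPTE_KLV; /* KLV, as in STANAG 4609 */
        return st;
    }

    /* Write one metadata payload stamped with the same timestamp as the
     * video frame it accompanies (pts in klv_st->time_base units), so the
     * receiving side can re-associate data with frames. */
    static int write_klv_packet(AVFormatContext *oc, AVStream *klv_st,
                                const uint8_t *payload, int size, int64_t pts)
    {
        AVPacket *pkt = av_packet_alloc();
        int ret;
        if (!pkt)
            return AVERROR(ENOMEM);
        ret = av_new_packet(pkt, size);
        if (ret < 0) {
            av_packet_free(&pkt);
            return ret;
        }
        memcpy(pkt->data, payload, size);
        pkt->pts = pkt->dts = pts;
        pkt->stream_index = klv_st->index;
        ret = av_interleaved_write_frame(oc, pkt); /* consumes the packet ref */
        av_packet_free(&pkt);
        return ret;
    }

    A data stream like this travels in the container itself; packet side data, by contrast, has no AVPacketSideDataType value dedicated to arbitrary user payloads, which matches the uncertainty noted above.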

  • How to use prebuilt FFmpeg in Android Studio

26 May 2016, by vxh.viet

I'm sure this is a very basic question, but since this is my first time messing around with the NDK, a lot of things are still very unclear to me.

Use case:

    • I'm trying to develop a video scrubbing feature, so fast and accurate frame seeking is crucial. I've tried most of the available players out there, but their performance is still not up to my needs. That's why I'm going down the FFmpeg route.

    • Basically, what I'm looking for is FFmpeg input seeking. I've tried WritingMinds' ffmpeg-android-java. However, it is a file-based implementation, which means the out.jpg needs to be written to external memory and read back, which takes a big performance hit (roughly 1000 milliseconds per seek).

    • That's why I'm trying to build my own FFmpeg player to do the input seeking in JNI and push the byte[] back to Java for display.

Question: After a lot of struggling with the NDK, I've managed to set it up and successfully call the JNI method from my Java code. The structure is as below:

    MyApp
     -app
     -MyFFmpegPlayer
       -build
       -libs
       -src
         -main
           -java
             -com.example.myffmpegplayer
               +HelloJNI.java
           -jni
             +MyFFmpegPlayer.c

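    As an illustration of a possible next step, a minimal sketch of what an input-seeking entry point in MyFFmpegPlayer.c could look like once FFmpeg is available as linkable shared libraries; the seekTo method name and its signature are assumptions for this example, not an existing API:

    #include <jni.h>
    #include <libavformat/avformat.h>

    /* Hypothetical native method: open the media, seek to the requested
     * timestamp (microseconds, i.e. AV_TIME_BASE units), and report the
     * result. A real player would keep the AVFormatContext open across
     * calls instead of reopening it for every seek. FFmpeg builds of this
     * era also require a one-time av_register_all() at startup. */
    JNIEXPORT jint JNICALL
    Java_com_example_myffmpegplayer_HelloJNI_seekTo(JNIEnv *env, jobject thiz,
                                                    jstring jpath, jlong usec)
    {
        const char *path = (*env)->GetStringUTFChars(env, jpath, NULL);
        AVFormatContext *fmt = NULL;
        int ret = avformat_open_input(&fmt, path, NULL, NULL);
        (*env)->ReleaseStringUTFChars(env, jpath, path);
        if (ret < 0)
            return ret;
        /* AVSEEK_FLAG_BACKWARD lands on the nearest preceding keyframe;
         * decode forward from there for frame-accurate scrubbing. */
        ret = av_seek_frame(fmt, -1, (int64_t)usec, AVSEEK_FLAG_BACKWARD);
        avformat_close_input(&fmt);
        return ret;
    }
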
After some failed attempts to build FFmpeg on Windows, I've decided to use the WritingMinds prebuilt FFmpeg. However, after extraction they just come up as plain ffmpeg files (not .so files), so I don't really know how to use them.

I would be very grateful if someone could chime in and give me a good starting point for my next step.

    Thank you so much for your time.