
Media (91)

Other articles (64)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all of the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the following two images for a comparison.
    To do this, simply activate the Chosen plugin (General site configuration > Plugin management), then configure the plugin (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (6618)

  • How to reprocess H264 data to image/video

    3 October 2017, by somethingunexpected

    I have been working on a project which blows my mind.

    First of all, I have an IP camera that streams H264 data. I am able to stream it with its RTSP URL via VLC/ffmpeg/python (rtsp://ip:port/PSIA/Streaming/channels/2?videoCodecType=H.264), so there is nothing to worry about on the streaming side.

    However, in this project the camera is connected to another system. That system delivers the H264 data over Ethernet to my PC, so the camera is not connected directly to the PC.
    The system sends the data every 10 ms in 1000-byte chunks; for example, if the incoming data is only 800 bytes, it adds 200 null bytes of padding.

    What I can do is strip these null bytes and read the raw data via a socket. To keep latency down I use the thread module, so no data is lost.

    What I need to do is turn this raw H264 data into an image sequence or a video that can be displayed with VLC, ffmpeg, Media Player Classic, etc.

    I would prefer to write my code in Python, and every operation has to run in real time, so any help would be appreciated.
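
    A minimal sketch of the decode side, in C against the libavcodec API (even though the question prefers Python, the C API makes the data flow explicit); read_chunk() and handle_frame() are hypothetical placeholders for the socket reader and for whatever saves or displays each picture. The idea: drop the trailing null padding from each 1000-byte chunk, let the H.264 parser reassemble complete access units across chunk boundaries, and decode them into AVFrames.

    // Sketch only: depacketize zero-padded chunks and decode the raw H.264
    // elementary stream with libavcodec. Error handling is mostly omitted.
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <libavcodec/avcodec.h>

    #define CHUNK_SIZE 1000

    /* Hypothetical: fill buf with one chunk from the camera link, return the
     * number of bytes read, or 0 on end of stream. */
    extern int read_chunk(uint8_t *buf, int size);

    /* Hypothetical: consume one decoded picture (save it as an image, display it, ...). */
    extern void handle_frame(const AVFrame *frame);

    int main(void)
    {
        const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        AVCodecParserContext *parser = av_parser_init(codec->id);
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        uint8_t chunk[CHUNK_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];

        avcodec_open2(ctx, codec, NULL);

        for (;;) {
            int n = read_chunk(chunk, CHUNK_SIZE);
            if (n <= 0)
                break;

            /* Drop the trailing null padding added to fill the 1000-byte chunk
             * (assumes the real payload itself never ends in zero bytes). */
            while (n > 0 && chunk[n - 1] == 0)
                n--;
            memset(chunk + n, 0, AV_INPUT_BUFFER_PADDING_SIZE);

            /* The parser reassembles complete H.264 access units from the byte
             * stream, so chunk boundaries need not match NAL unit boundaries. */
            uint8_t *data = chunk;
            while (n > 0) {
                int used = av_parser_parse2(parser, ctx, &pkt->data, &pkt->size,
                                            data, n,
                                            AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
                data += used;
                n    -= used;

                if (pkt->size > 0) {
                    avcodec_send_packet(ctx, pkt);
                    while (avcodec_receive_frame(ctx, frame) == 0)
                        handle_frame(frame);   /* one decoded picture */
                }
            }
        }

        av_parser_close(parser);
        avcodec_free_context(&ctx);
        av_packet_free(&pkt);
        av_frame_free(&frame);
        return 0;
    }

    From Python, a common shortcut for the same pipeline is to strip the padding and pipe the resulting byte stream into an ffmpeg subprocess started with -f h264 -i -, letting ffmpeg do the parsing and decoding and write out an image sequence or a playable file.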

  • FFmpeg AVFrame Audio Data Modification

    5 September 2016, by Arlind

    I’m trying to figure out how FFmpeg saves data in an AVFrame after the audio has been decoded.

    Basically, if I print the data in the AVFrame->data[] array I get a series of unsigned 8-bit integers, which is the audio in raw format.

    From what I can understand from the FFmpeg Doxygen, the format of the data is expressed in the enum AVSampleFormat, and there are two main categories: interleaved and planar. In the interleaved type, all of the data is kept in the first row of the AVFrame->data array, with size AVFrame->linesize[0], while in the planar type each channel of the audio is kept in a separate row of the AVFrame->data array, each of size AVFrame->linesize[0].

    Is there a guide/tutorial that explains what the numbers in the array mean for each of the formats?
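
    The FFmpeg Doxygen for AVSampleFormat and samplefmt.h describes the layouts; as a concrete illustration, the sketch below assumes the frame arrives as S16 (interleaved) or FLTP (planar) and shows how the raw bytes map onto samples in each case. Other formats follow the same two patterns with a different sample type.

    // Sketch: read decoded audio samples out of an AVFrame for one interleaved
    // and one planar sample format.
    #include <stdio.h>
    #include <stdint.h>
    #include <libavutil/frame.h>
    #include <libavutil/samplefmt.h>

    static void dump_samples(const AVFrame *frame)
    {
        int channels = frame->ch_layout.nb_channels;  /* frame->channels on older FFmpeg */

        switch (frame->format) {
        case AV_SAMPLE_FMT_S16: {
            /* Interleaved: data[0] holds ch0, ch1, ..., ch0, ch1, ... as 16-bit ints. */
            const int16_t *samples = (const int16_t *)frame->data[0];
            for (int i = 0; i < frame->nb_samples; i++)
                for (int ch = 0; ch < channels; ch++)
                    printf("sample %d, channel %d: %d\n", i, ch,
                           samples[i * channels + ch]);
            break;
        }
        case AV_SAMPLE_FMT_FLTP: {
            /* Planar: data[ch] holds every sample of channel ch as 32-bit floats
             * (with more than 8 channels the pointers live in frame->extended_data). */
            for (int ch = 0; ch < channels; ch++) {
                const float *plane = (const float *)frame->data[ch];
                for (int i = 0; i < frame->nb_samples; i++)
                    printf("sample %d, channel %d: %f\n", i, ch, plane[i]);
            }
            break;
        }
        default:
            printf("unhandled sample format %s\n",
                   av_get_sample_fmt_name(frame->format));
        }
    }

    In both cases av_get_bytes_per_sample(frame->format) gives the size of one sample, so linesize[0] covers nb_samples * channels * bytes_per_sample for interleaved data and nb_samples * bytes_per_sample per plane for planar data (possibly rounded up for alignment).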

  • Writing decoded YUV420P data into a file with FFmpeg?

    9 May 2016, by Sir DrinksCoffeeALot

    I’ve read a frame encoded with H264, decoded it, and converted it to YUV420P; the data is stored in frameYUV420->data (the frame is of type AVFrame). I want to save that data into a file that can be displayed with GIMP, for example.

    I know how to save the RGB24 pixel format, but I’m not quite sure how to do YUV420P. I do know that the Y component takes width x height bytes, and Cb/Cr each take (width/2) x (height/2) bytes, so I’m guessing I need to write the Y data first and then the Cb and Cr data. Does anyone have finished code that I could take a look at?
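
    A small sketch of one way to do this, assuming frame->width and frame->height are set on the decoded frame: write the Y plane row by row, then Cb, then Cr, skipping whatever per-row padding linesize adds. GIMP does not open raw YUV directly, so a practical route is to convert the dump to PNG first, e.g. ffmpeg -f rawvideo -pixel_format yuv420p -video_size WxH -i out.yuv out.png (with WxH replaced by the real dimensions).

    // Sketch: dump a decoded YUV420P AVFrame to a raw .yuv file, row by row so
    // that any alignment padding in linesize is skipped. The result can be
    // previewed with: ffplay -f rawvideo -pixel_format yuv420p -video_size WxH out.yuv
    #include <stdio.h>
    #include <libavutil/frame.h>

    static int write_yuv420p(const AVFrame *frame, const char *filename)
    {
        FILE *f = fopen(filename, "wb");
        if (!f)
            return -1;

        /* Y plane: width x height bytes. */
        for (int y = 0; y < frame->height; y++)
            fwrite(frame->data[0] + y * frame->linesize[0], 1, frame->width, f);

        /* Cb plane, then Cr plane: (width/2) x (height/2) bytes each. */
        for (int y = 0; y < frame->height / 2; y++)
            fwrite(frame->data[1] + y * frame->linesize[1], 1, frame->width / 2, f);
        for (int y = 0; y < frame->height / 2; y++)
            fwrite(frame->data[2] + y * frame->linesize[2], 1, frame->width / 2, f);

        fclose(f);
        return 0;
    }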