
Media (1)

Keyword: - Tags -/école

Other articles (96)

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable through the form-template management; adding notes to articles; adding captions and annotations to images;

  • Depositing media and themes via FTP

    31 May 2013

    MediaSPIP also handles media transferred via FTP. If you prefer to deposit files this way, retrieve the access credentials for your MediaSPIP site and use your favorite FTP client.
    From the start, you will find the following directories in your FTP space:
    config/: the site’s configuration directory
    IMG/: media already processed and online on the site
    local/: the site’s cache directory
    themes/: custom themes and stylesheets
    tmp/: working directory (...)
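
    As a hypothetical illustration of such a deposit, here is a minimal sketch using Python’s standard ftplib in place of a desktop FTP client; the host, credentials, and target directory are placeholders, not values from the article:

        # Minimal FTP deposit sketch; every value below is a placeholder.
        from ftplib import FTP

        with FTP("mediaspip.example.org") as ftp:
            ftp.login(user="myuser", passwd="mypassword")
            ftp.cwd("tmp")  # hypothetical target directory
            with open("video.mp4", "rb") as f:
                ftp.storbinary("STOR video.mp4", f)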

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What’s new?
    Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favor of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
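
    Since the excerpt mentions FFprobe being used for metadata retrieval, here is a hypothetical sketch of that kind of call (not MediaSPIP’s actual code); the file name is a placeholder:

        # Ask ffprobe for container and stream metadata as JSON.
        import json
        import subprocess

        def probe(path):
            out = subprocess.check_output(
                ["ffprobe", "-v", "quiet", "-print_format", "json",
                 "-show_format", "-show_streams", path])
            return json.loads(out)

        info = probe("example.mp4")  # placeholder file name
        print(info["format"]["duration"])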

On other sites (13587)

  • Separate simultaneously changing regions of video into individual videos

    17 July 2019, by Elle Fie

    Given a single video stream (up to 4K resolution) in which only small displayed portions may change, I’d like to identify these changing sections and create a separate video stream for each one, in real time.

    Note that this is spatial extraction, not time slicing!

    Q1: Is there a better name for this process?

    Q2: Is this an already-solved problem?

    It seems ImageMagick’s compare program supports diffing two images, which I could process to identify the changed regions as coordinates for an ffmpeg crop (launched in parallel for each discovered diff region), but this method relies on having a PNG stream to avoid false-positive diffs caused by lossy encoding, and it is also too slow to run in real time. (A frame-differencing sketch along these lines follows the questions below.)

    Q3: Is there any way ffmpeg can dump out the regions that trigger its scene-change detection?
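
    As referenced above, here is a minimal sketch of the detection step, using OpenCV frame differencing rather than ImageMagick compare so it can stay closer to real time; the input path and the threshold value are assumptions:

        # Find per-frame changing regions as bounding boxes (OpenCV 4 API).
        import cv2

        cap = cv2.VideoCapture("input.mp4")  # placeholder source
        ok, prev = cap.read()
        while ok:
            ok, frame = cap.read()
            if not ok:
                break
            # Difference against the previous frame, then binarize.
            diff = cv2.absdiff(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
                               cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            for c in contours:
                x, y, w, h = cv2.boundingRect(c)
                # Each box could drive one ffmpeg crop=w:h:x:y stream.
                print(f"crop={w}:{h}:{x}:{y}")
            prev = frame

    Each bounding box could then seed a parallel ffmpeg crop filter, as the post suggests.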

  • ffmpeg NVENC Encoding with -flags:v +ildct shows "No NVENC capable devices found"

    29 May 2020, by gaamaa

    So far I have used NVENC with ffmpeg successfully for all my encoding. Today I got a new Zotac nVidia GeForce GTX 1660 6GB card.

    I get a "No NVENC capable devices found" error from ffmpeg, but only if I use the -flags:v +ildct flag. Without the ildct flag there are no issues, except that the output is progressive.

    I absolutely need interlaced output, and I have tried most of the interlacing flags, such as -vf tinterlace=interleave_top,fieldorder=tff and -x264opts tff=1. Nothing gives me interlaced output except -flags +ildct, but with the Zotac nVidia GeForce GTX 1660, ffmpeg shows:

        No NVENC capable devices found

    I even tried all the latest nVidia drivers. Nothing helped.

    My ffmpeg command line is essentially as follows:

        ffmpeg -i SourceFile.mkv -codec:v h264_nvenc -preset:v slow -flags:v +ildct+cgop -s:v 1920x1080 -ac 2 -ar 48000 -codec:a mp2 -b:a 384k -r 25 -f mp4 -y NewFile.mp4

    Is this a bug in the nVidia driver or in ffmpeg? Does the latest nVidia Turing generation no longer support interlaced encoding (which would be very bad)?

    Could someone help me?
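
    If the 1660’s Turing-generation NVENC has indeed dropped interlaced encoding (which would be consistent with the error appearing only with +ildct), one fallback is to keep the same parameters but encode with the libx264 software encoder. A minimal, unverified sketch reusing the file names from the command above:

        # Fallback: interlaced H.264 via libx264 instead of NVENC.
        import subprocess

        cmd = [
            "ffmpeg", "-i", "SourceFile.mkv",
            "-codec:v", "libx264",        # software encoder
            "-preset:v", "slow",
            "-flags:v", "+ildct+cgop",    # interlaced DCT, closed GOPs
            "-x264opts", "tff=1",         # top-field-first fields
            "-s:v", "1920x1080",
            "-ac", "2", "-ar", "48000",
            "-codec:a", "mp2", "-b:a", "384k",
            "-r", "25", "-f", "mp4", "-y", "NewFile.mp4",
        ]
        subprocess.run(cmd, check=True)

    This trades GPU speed for CPU time, but it preserves the interlaced output the post requires.
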
  • Extracting each individual frame from an H264 stream for real-time analysis with OpenCV

    11 March 2020, by exclmtnpt

    Problem Outline

    I have an h264 real-time video stream (I’ll call this "the stream") being captured in Process1. My goal is to extract each frame from the stream as it comes through and use Process2 to analyze it with OpenCV. (Process1 is nodejs, Process2 is Python)

    Things I’ve tried, and their failure modes:

    • Send the stream directly from Process1 to Process2 over a named FIFO pipe:

    I succeeded in directing the stream from Process1 into the pipe. However, in Process2 (which is Python) I could not (a) extract individual frames from the stream, or (b) convert any extracted data from h264 into an OpenCV-usable format (e.g. JPEG, numpy array).

    I had hoped to use OpenCV’s VideoCapture() method, but it does not allow you to pass a FIFO pipe as an input. I was able to use VideoCapture by saving the h264 stream to a .h264 file and then passing that as the file path. This doesn’t help me, because I need to do my analysis in real time (i.e. I can’t save the stream to a file before reading it into OpenCV).

    • Pipe the stream from Process1 to FFMPEG, use FFMPEG to change the stream format from h264 to MJPEG, then pipe the output to Process2:

    I attempted this using the command:

    cat pipeFromProcess1.fifo | ffmpeg -i pipe:0 -f h264 -f mjpeg pipe:1 | cat > pipeToProcess2.fifo

    The biggest issue with this approach is that FFMPEG takes inputs from Process1 until Process1 is killed, and only then does Process2 begin to receive the data.

    Additionally, on the Process2 side, I still don’t understand how to extract individual frames from the data coming over the pipe. I open the pipe for reading (as "f") and then execute data = f.readline(). The size of data varies drastically (some reads have length on the order of 100, others length on the order of 1,000). When I use f.read() instead of f.readline(), the length is much larger, on the order of 100,000.

    If I were to know that I was getting the correct size chunk of data, I would still not know how to transform it into an OpenCV-compatible array because I don’t understand the format it’s coming over in. It’s a string, but when I print it out it looks like this:

    ��_M 0A0����tQ,\%��e���f/�H�#Y�p�f#�Kus�} F����ʳa�G������+$x�%V�� }[����Wo �1’̶A���c����*�&=Z^�o’��Ͽ� SX-ԁ涶V&H|��$
     ��<�E�� ��>�����u���7�����cR� �f�=�9 ��fs�q�ڄߧ�9v�]�Ӷ���& gr]�n�IRܜ�檯����

    � ����+ �I��w�}� ��9�o��� �w��M�m���IJ ��� �m�=�Soՙ}S �>j �,�ƙ�’���tad =i ��WY�FeC֓z �2�g� ;EXX��S��Ҁ*, ���w� _|�&�y��H��=��)� ���Ɗ3@ �h���Ѻ�Ɋ��ZzR`��)�y�� c�ڋ.��v� !u���� �S�I#�$9R�Ԯ0py z ��8 #��A�q�� �͕� ijc �bp=��۹ c SqH

    Converting from base64 doesn’t seem to help. I also tried:

    array = np.fromstring(data, dtype=np.uint8)

    which does convert to an array, but not one whose size makes sense given the 640x368x3 dimensions of the frames I’m trying to decode. (A sketch that sidesteps both the chunk-size and format problems appears after this list.)

    • Using decoders such as Broadway.js to convert the h264 stream:

    These seem to be focused on streaming to a website, and I did not have success trying to re-purpose them for my goal.
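
    As referenced above, here is a minimal sketch of one way around the chunk-size and format problems, assuming ffmpeg is on the PATH and the 640x368 frame size from the question: ffmpeg decodes the FIFO to raw bgr24 frames on stdout, so every frame is exactly width x height x 3 bytes and maps directly onto a numpy array.

        # Read fixed-size raw BGR frames decoded by ffmpeg from the FIFO.
        import subprocess
        import numpy as np

        WIDTH, HEIGHT = 640, 368
        FRAME_BYTES = WIDTH * HEIGHT * 3  # bgr24: 3 bytes per pixel

        proc = subprocess.Popen(
            ["ffmpeg", "-i", "pipeFromProcess1.fifo",
             "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:1"],
            stdout=subprocess.PIPE)

        while True:
            raw = proc.stdout.read(FRAME_BYTES)  # exactly one frame
            if len(raw) < FRAME_BYTES:
                break  # stream ended
            frame = np.frombuffer(raw, np.uint8).reshape((HEIGHT, WIDTH, 3))
            # "frame" is now an OpenCV-compatible BGR array; analyze here.

    Whether ffmpeg starts emitting frames before the writing process closes the FIFO would still need to be verified on the Pi.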

    Clarification about what I’m NOT trying to do:

    I’ve found many related questions about streaming h264 video to a website. This is a solved problem, but none of the solutions help me extract individual frames and put them in an OpenCV-compatible format.

    Also, I need to use the extracted frames in real time on a continual basis. So saving each frame as a .jpg is not helpful.

    System Specs

    Raspberry Pi 3 running Raspbian Jessie

    Additional Detail

    I’ve tried to generalize the problem I’m having in my question. If it’s useful to know, Process1 is using the node-bebop package to pull down the h264 stream (using drone.getVideoStream()) from a Parrot Bebop 2.0. I tried using the other video stream available through node-bebop (getMjpegStream()). This worked, but was not nearly real time; I was getting very intermittent data streams. I’ve entered that specific problem as an Issue in the node-bebop repository.

    Thanks for reading; I really appreciate any help anyone can give!