
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (28)
-
Customize it by adding your logo, banner or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013, by
Present the changes in your MédiaSPIP, or news about your projects, on your MédiaSPIP through the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form for creating a news item.
News item creation form: for a document of type news item, the default fields are: publication date (customize the publication date) (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.
On other sites (5882)
-
How to deal with every frame of a video using ffmpeg?
7 February 2019, by candrwow
I am using tensorflow to do video object segmentation, but so far I only know how to do it on images (png, jpg). I want to do it on a short video (15 seconds). How do I get the frames one by one, in sequence, and process each of them?
Right now I split the mp4 into PNGs with ffmpeg, run segmentation on every PNG, and finally recombine the PNGs into an mp4 and delete them. The split and recombine steps look like this:
ffmpeg -i video.mp4 -r 24 ./split/%03d.png
ffmpeg -f image2 -i imgFilePath -r 24 videoDesPath
But this solution is not good: it generates many images on disk and needs extra I/O, and if the process fails, many PNGs may never be cleaned up. I would like a solution like the one below:
- convert the video to a stream (in Java, because I'm using it on Android)
- read the stream to get the first frame, do object segmentation on it (this takes 150 ms to 1 s, so it must run in a worker thread) and write the segmented frame to a new stream (the segmented stream).
- repeat the above steps, and finally convert the segmented stream back to a video.
Can you show me how to convert a video to a stream and get frames from that stream? Thank you!
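For illustration only, here is a minimal sketch of the pipe-based layout the question asks for, written in TypeScript/Node (the question mentions Java on Android, but the flow is the same): one ffmpeg process decodes the video to raw frames on stdout, each frame is processed in memory, and a second ffmpeg process encodes the processed frames from stdin. The width, height, frame rate and the segmentFrame() placeholder are assumptions, not details from the question.

// Sketch: decode raw frames from one ffmpeg process, process each frame in
// memory, and feed the result to a second ffmpeg process that encodes mp4.
// WIDTH/HEIGHT/FPS are assumptions; read them from the real input first
// (for example with ffprobe).
import { spawn } from "child_process";

const WIDTH = 1280, HEIGHT = 720, FPS = 24;
const FRAME_SIZE = WIDTH * HEIGHT * 3; // rgb24: 3 bytes per pixel

// Decoder: video.mp4 -> raw rgb24 frames on stdout
const decoder = spawn("ffmpeg", [
  "-i", "video.mp4",
  "-f", "rawvideo", "-pix_fmt", "rgb24", "pipe:1",
]);

// Encoder: raw rgb24 frames on stdin -> out.mp4
const encoder = spawn("ffmpeg", [
  "-f", "rawvideo", "-pix_fmt", "rgb24",
  "-s", `${WIDTH}x${HEIGHT}`, "-r", String(FPS),
  "-i", "pipe:0",
  "-c:v", "libx264", "out.mp4",
]);

// Placeholder for the per-frame segmentation call (hypothetical).
function segmentFrame(frame: Buffer): Buffer {
  return frame; // replace with the actual model inference
}

let pending = Buffer.alloc(0);
decoder.stdout.on("data", (chunk: Buffer) => {
  pending = Buffer.concat([pending, chunk]);
  // Slice the byte stream into whole frames before processing.
  while (pending.length >= FRAME_SIZE) {
    const frame = pending.subarray(0, FRAME_SIZE);
    pending = pending.subarray(FRAME_SIZE);
    encoder.stdin.write(segmentFrame(frame));
  }
});
decoder.stdout.on("end", () => encoder.stdin.end());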
-
RTSP stream to ffmpeg problems
14 October 2022, by maeek
I'm writing a web application for managing and viewing streams from ONVIF IP cameras.

It's written in nodejs. The idea is to run a child process in node and pipe its output back to node, then send the buffer to the client and render it on a canvas. I have a working solution for sending data to the client and rendering it on a canvas using websockets, but it only works on one of my cameras.
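For reference, a minimal sketch of the pipeline described above, in TypeScript/Node and assuming the ws websocket package; the camera URL, port and client-side frame handling are placeholders, not details from the question:

// Spawn ffmpeg as a child process, read the MJPEG bytes from its stdout
// and push them to every connected websocket client; the browser side
// reassembles JPEG frames and draws them on a canvas.
import { spawn } from "child_process";
import WebSocket, { WebSocketServer } from "ws"; // npm package "ws" (assumed)

const CAMERA_URL = "rtsp://user:pass@camera/stream"; // placeholder
const wss = new WebSocketServer({ port: 8081 });     // placeholder port

const ffmpeg = spawn("ffmpeg", [
  "-rtsp_transport", "tcp", "-re",
  "-i", CAMERA_URL,
  "-f", "mjpeg", "pipe:1",
]);

ffmpeg.stdout.on("data", (chunk: Buffer) => {
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(chunk);
  }
});

ffmpeg.stderr.on("data", (d: Buffer) => process.stderr.write(d));
ffmpeg.on("close", (code) => console.log(`ffmpeg exited with code ${code}`));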

I own 2 IP cameras, and both of them have an RTSP server.

One of them (let's call it camX) kind of works with this ffmpeg command (sometimes it just stops, maybe due to packet loss):

ffmpeg -rtsp_transport tcp -re -i -f mjpeg pipe:1



But the other one (camY) returns Nonmatching transport in server reply and exits.

I discovered that camY's transport is unicast, but ffmpeg doesn't support this particular lower_transport, as I read on the ffmpeg forum.

So I started looking for a solution. My first idea was to use openRTSP, which works fine with both streams. I looked at the documentation and came up with this command:

openRTSP -4 -c | ffmpeg -re -i pipe:0 -f mjpeg pipe:1

The -4 parameter makes it write the stream to the pipe in mp4 format.

And here's another problem I ran into; ffmpeg returns:

[mov,mp4,m4a,3gp,3g2,mj2 @ 0x559a4b6ba900] moov atom not found 
pipe:0: Invalid data found when processing input



Is there any way to make this work? I tried various solutions I found, but none of them worked.


EDIT


As @Gyan suggested, I used the -i parameter instead of -4, but it didn't solve my problem.

My command:


openRTSP -V -i -c -K | ffmpeg -loglevel debug -re -i pipe:0 -f mjpeg pipe:1
 
Created receiver for "video/H264" subsession (client ports 49072-49073)
Setup "video/H264" subsession (client ports 49072-49073)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
Outputting to the file: "stdout"
[avi @ 0x5612944268c0] Format avi probed with size=2048 and score=100
[avi @ 0x56129442f7a0] use odml:1
Started playing session
Receiving streamed data (signal with "kill -HUP 15028" or "kill -USR1 15028" to terminate)...
^C
[AVIOContext @ 0x56129442f640] Statistics: 16904 bytes read, 0 seeks
pipe:0: Invalid data found when processing input



As you can see, the openRTSP command returns err 29, but in the meantime it outputs some data to the pipe.

When I terminate the command, ffmpeg shows that it read some data but couldn't process it.

Here's the function that produces that error:


void AVIFileSink::setWord(unsigned filePosn, unsigned size) {
  do {
    if (SeekFile64(fOutFid, filePosn, SEEK_SET) < 0) break;
    addWord(size);
    if (SeekFile64(fOutFid, 0, SEEK_END) < 0) break; // go back to where we were

    return;
  } while (0);

  // One of the SeekFile64()s failed, probably because we're not a seekable file
  envir() << "AVIFileSink::setWord(): SeekFile64 failed (err "
          << envir().getErrno() << ")\n";
}



In my opinion, it looks like it can't seek in the file because it's a stream, not a static file.

Any suggestions for a workaround?

-
ffmpeg not reading stdin fast enough
5 March 2019, by Joshua Walsh
I have a NodeJS program which launches ffmpeg (with child_process) and then provides realtime video data via stdin, using the pipe protocol.
ffmpeg -nostdin -i pipe:0 -codec libx264 -preset veryfast -tune zerolatency -acodec aac -b:a 128k output/index.m3u8
ffmpeg transcodes the video into h264 and muxes it into an HLS live stream.
The issue that I have is that sometimes ffmpeg refuses to accept more input. The default behaviour of NodeJS is to buffer input until the child process can accept it, but after a while this causes my application to run out of memory.
I tried a naive solution where, if ffmpeg wasn't able to read the input (if proc.stdin.write returns false), I would start discarding data until the drain event was raised on the stream, but this unsurprisingly led to badly degraded video output, with terrible artifacting.
The nature of the source of the data makes it impossible for me to block; my application has to deal with it in realtime.
ffmpeg is using only a fraction of the available resources on the system (35% CPU, 1% disk), so I’m not sure why it’s blocking stdin. If I specify a more demanding preset then it happily uses more CPU, so CPU speed shouldn’t be a limiting factor.
Does anyone know why ffmpeg would be blocking stdin? Is there a way I can tell ffmpeg to drop frames if it starts falling behind?
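For what it's worth, here is a sketch of the backpressure handling described above, in TypeScript/Node: complete frames are queued, the queue is flushed only while ffmpeg's stdin accepts writes, and flushing resumes on the drain event; when the queue is full, whole frames are dropped on the Node side. The queue bound, the onFrame() entry point and the drop policy are assumptions for illustration, not part of the original program.

// Respect backpressure from ffmpeg's stdin by queueing complete frames
// and resuming on 'drain', instead of discarding arbitrary bytes
// (dropping whole frames avoids mid-frame corruption).
import { spawn } from "child_process";

const ffmpeg = spawn("ffmpeg", [
  "-i", "pipe:0",
  "-codec", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
  "-acodec", "aac", "-b:a", "128k",
  "output/index.m3u8",
]);

const MAX_QUEUE = 60;            // bound on buffered frames (assumption)
const queue: Buffer[] = [];
let blocked = false;

ffmpeg.stdin.on("drain", () => { // ffmpeg drained its buffer, resume writing
  blocked = false;
  flush();
});

function flush(): void {
  while (!blocked && queue.length > 0) {
    const frame = queue.shift()!;
    // write() returns false when the internal buffer is full.
    blocked = !ffmpeg.stdin.write(frame);
  }
}

// Called by the realtime source with one complete frame at a time.
export function onFrame(frame: Buffer): void {
  if (queue.length >= MAX_QUEUE) queue.shift(); // drop the oldest whole frame
  queue.push(frame);
  flush();
}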