
Other articles (36)
-
Customize by adding your logo, banner, or background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.
-
Configurable image and logo sizes
9 February 2011
In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can change from one theme to another, they can be defined directly in the theme, saving the user from having to configure them manually after changing the appearance of their site.
These image sizes are also available in the specific configuration of MediaSPIP Core. The maximum size of the site logo in pixels (...)
On other sites (5093)
-
atrac3: fix error handling
9 July 2013, by Luca Barbato
-
RTSP stream to ffmpeg problems
14 October 2022, by maeek
I'm writing a web application for managing and viewing streams from ONVIF IP cameras.

It's written in Node.js. The idea is to run a child process in Node, pipe its output back to Node, and then send the buffer to the client and render it on a canvas. I have a working solution for sending the data to the client and rendering it on a canvas using WebSockets, but it only works with one of my cameras.
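
Roughly, the relay looks like this; a minimal sketch, written in Python purely for illustration (the real app is Node.js), assuming a recent version of the third-party websockets package, with RTSP_URL and the port as placeholders:

# Minimal sketch of the child-process-to-WebSocket relay described above.
# Python is used here only for illustration; the real app is Node.js.
# Assumes the third-party `websockets` package; RTSP_URL and the port
# are placeholders, not values from my setup.
import asyncio

import websockets  # pip install websockets

RTSP_URL = "rtsp://camera.example/stream"  # placeholder


async def relay(ws):
    # Spawn ffmpeg and read its MJPEG output from the pipe.
    proc = await asyncio.create_subprocess_exec(
        "ffmpeg", "-rtsp_transport", "tcp", "-re", "-i", RTSP_URL,
        "-f", "mjpeg", "pipe:1",
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.DEVNULL,
    )
    try:
        while True:
            chunk = await proc.stdout.read(65536)
            if not chunk:
                break
            await ws.send(chunk)  # the browser draws each JPEG on a canvas
    finally:
        proc.terminate()
        await proc.wait()


async def main():
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()  # run until interrupted


if __name__ == "__main__":
    asyncio.run(main())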

I own two IP cameras, and both of them have an RTSP server.

One of them (let's call it camX) kind of works with this ffmpeg command (sometimes it just stops, maybe due to packet loss):

ffmpeg -rtsp_transport tcp -re -i -f mjpeg pipe:1



But the other one (camY) returns
Nonmatching transport in server reply
and exits.

I discovered that camY's transport is unicast, but ffmpeg doesn't support this particular lower_transport, as I read on the ffmpeg forum.

So I started looking for a solution. My first idea was to use openRTSP, which works fine with both streams. I looked at the documentation and came up with this command:

openRTSP -4 -c | ffmpeg -re -i pipe:0 -f mjpeg pipe:1

The -4 parameter makes openRTSP output the stream to the pipe (stdout) in MP4 format.

And here's another problem I ran into; ffmpeg returns:

[mov,mp4,m4a,3gp,3g2,mj2 @ 0x559a4b6ba900] moov atom not found 
pipe:0: Invalid data found when processing input



Is there any way to make this work? I tried various solutions I found, but none of them worked.


EDIT


As @Gyan suggested, I used the -i parameter instead of -4, but it didn't solve my problem.

My command:


openRTSP -V -i -c -K | ffmpeg -loglevel debug -re -i pipe:0 -f mjpeg pipe:1
 
Created receiver for "video/H264" subsession (client ports 49072-49073)
Setup "video/H264" subsession (client ports 49072-49073)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
Outputting to the file: "stdout"
[avi @ 0x5612944268c0] Format avi probed with size=2048 and score=100
[avi @ 0x56129442f7a0] use odml:1
Started playing session
Receiving streamed data (signal with "kill -HUP 15028" or "kill -USR1 15028" to terminate)...
^C
[AVIOContext @ 0x56129442f640] Statistics: 16904 bytes read, 0 seeks
pipe:0: Invalid data found when processing input



As you can see, the openRTSP command returns err 29, but in the meantime it outputs some data to the pipe.

When I terminate the command, ffmpeg shows that it read some data but couldn't process it.

Here's the function that produces that error:


void AVIFileSink::setWord(unsigned filePosn, unsigned size) {
 do {
 if (SeekFile64(fOutFid, filePosn, SEEK_SET) < 0) break;
 addWord(size);
 if (SeekFile64(fOutFid, 0, SEEK_END) < 0) break; // go back to where we were

 return;
 } while (0);

 // One of the SeekFile64()s failed, probable because we're not a seekable file
 envir() << "AVIFileSink::setWord(): SeekFile64 failed (err "
 << envir().getErrno() << ")\n";
}



In my opinion, it looks like it can't seek because the output is a stream (a pipe), not a regular file; err 29 on Linux is ESPIPE ("Illegal seek").
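
A quick way to reproduce that failure in isolation, as a minimal Python snippet (assuming Linux, where errno 29 is ESPIPE):

# Demonstration: seeking on a pipe fails with errno 29 (ESPIPE on Linux),
# which is what AVIFileSink runs into when its "file" is stdout piped to ffmpeg.
import os

r, w = os.pipe()
try:
    os.lseek(r, 0, os.SEEK_SET)
except OSError as e:
    print(e.errno, e.strerror)  # prints: 29 Illegal seek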

Any suggestions for a workaround?

-
Record video with Xvfb + FFmpeg using Selenium in headless mode
12 March 2024, by ifdef14
I am trying to record video using Selenium in headless mode. I am using Xvfb and FFmpeg from Python. I've already tried:


import subprocess
import threading
import time

from chromedriver_py import binary_path
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from xvfbwrapper import Xvfb


def record_video(xvfb_width, xvfb_height, xvfb_screen_num):
    subprocess.call(
        [
            'ffmpeg',
            '-f',
            'x11grab',
            '-video_size',
            f'{xvfb_width}x{xvfb_height}',
            '-i',
            xvfb_screen_num,
            '-codec:v',
            'libx264',
            '-r',
            '12',
            'videos/video.mp4',
        ]
    )


with Xvfb() as xvfb:
    '''
    xvfb.xvfb_cmd[1] returns screen num
    :217295622
    :319294854
    :
    '''
    xvfb_width, xvfb_height, xvfb_screen_num = xvfb.width, xvfb.height, xvfb.xvfb_cmd[1]
    thread = threading.Thread(target=record_video, args=(xvfb_width, xvfb_height, xvfb_screen_num))
    thread.start()
    opts = webdriver.ChromeOptions()
    opts.add_argument('--headless')
    try:
        driver = webdriver.Chrome(service=Service(executable_path=binary_path), options=opts)
    finally:
        driver.close()
        driver.quit()




As far as I understand, xvfb.xvfb_cmd[1] returns information about the virtual display, doesn't it? When I executed this script, I got the error message:

[x11grab @ 0x5e039cfe2280] Failed to query xcb pointer0.00 bitrate=N/A speed=N/A 
:1379911620: Generic error in an external library



I also tried to use the following commands:


xvfb-run --listen-tcp --server-num 1 --auth-file /tmp/xvfb.auth -s "-ac -screen 0 1920x1080x24" python main.py &


ffmpeg -f x11grab -video_size 1920x1080 -i :1 -codec:v libx264 -r 12 videos/video.mp4


In the commands above, xvfb-run --server-num 1 and ffmpeg -i :1 are used. Why?

Overall, when Selenium is running in headless mode, what's going on behind the scenes? Is it using a virtual display? If yes, how can I detect its display ID? Am I on the right path?


I am not using Docker or any kind of virtualization. All the tests are running on my local Ubuntu machine.
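
For reference, here is a minimal sketch of the recording setup I am aiming for. It is an assumption, not a verified solution: it presumes that xvfbwrapper's Xvfb exposes new_display, that x11grab accepts the display as -i :N, and that the browser session actually renders on that Xvfb display (i.e. not with --headless); the output path is a placeholder.

# Minimal sketch (not a verified solution): record the Xvfb display with ffmpeg.
# Assumptions: xvfbwrapper's Xvfb exposes `new_display`, x11grab accepts the
# display as `-i :N`, and the browser session renders on that display.
import subprocess

from xvfbwrapper import Xvfb

OUTPUT = "videos/video.mp4"  # placeholder path

with Xvfb(width=1920, height=1080) as xvfb:
    display = f":{xvfb.new_display}"  # same form as `ffmpeg -i :1` above
    recorder = subprocess.Popen(
        [
            "ffmpeg", "-y",
            "-f", "x11grab",
            "-video_size", f"{xvfb.width}x{xvfb.height}",
            "-i", display,
            "-codec:v", "libx264",
            "-r", "12",
            OUTPUT,
        ]
    )
    try:
        # ... run the Selenium session here, with DISPLAY pointing at `display`
        # and without --headless, so the browser actually draws on the Xvfb screen ...
        pass
    finally:
        recorder.terminate()  # stop ffmpeg once the test is done
        recorder.wait()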