
Other articles (80)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If in doubt, contact your MediaSPIP administrator to find out.
-
Enabling and disabling features (plugins)
18 February 2011
To manage adding and removing extra features (plugins), MediaSPIP uses SVP as of version 0.2.
SVP makes it easy to enable plugins from the MediaSPIP configuration area.
To access it, go to the configuration area and open the "Gestion des plugins" (plugin management) page.
By default, MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated to work perfectly with each (...)
-
Enabling visitor registration
12 April 2011
It is also possible to enable visitor registration, which lets anyone open an account on the channel in question themselves, for example in the context of open projects.
To do so, go to the site's configuration area and choose the "Gestion des utilisateurs" (user management) submenu. The first form shown corresponds to this feature.
By default, MediaSPIP created a menu item during its initialization in the top menu of the page, leading (...)
On other sites (5506)
-
How to override ffserver parameters with an ffmpeg command line?
1 February 2017, by Bepeho
I'm designing a simple camera-switching system with ffmpeg and ffserver.
I'm using ffmpeg to redirect one camera stream to a single ffserver. I've set up ffserver to generate WebM output:
<feed>
File /tmp/PREVIEW.ffm
FileMaxSize 1M
</feed>
<stream>
Feed PREVIEW.ffm
Format webm
AudioBitRate 64
AudioSampleRate 24000
AudioChannels 1
VideoCodec libvpx
VideoSize 640x380
VideoFrameRate 25
AVOptionVideo flags +global_header
AVOptionAudio flags +global_header
AVOptionVideo cpu-used 0
AVOptionVideo qmin 10
AVOptionVideo qmax 42
AVOptionVideo quality good
PreRoll 0
StartSendOnKey
VideoBitRate 1M
</stream>
It works well when I feed it with an ffmpeg command line such as:
ffmpeg -re -rtbufsize 1500M -thread_queue_size 512 -rtsp_transport tcp -i rtsp://admin:admin@192.168.0.10:554 -f lavfi -i anullsrc http://localhost:8090/PREVIEW.ffm
Now I want to insert dynamic text into the generated video stream, so I've added a drawtext filter to the previous command line:
ffmpeg -re -rtbufsize 1500M -thread_queue_size 512 -rtsp_transport tcp -i rtsp://admin:admin@192.168.0.10:554 -f lavfi -i anullsrc -vf drawtext='fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:textfile=mytextfile.txt:reload=1:fontcolor=white:fontsize=48:box=1:boxcolor=black@0.5:boxborderw=5:x=(w-text_w):y=(h-text_h)' http://localhost:8090/PREVIEW.ffm
But of course, it's not working!
I suppose that's because ffserver dictates the stream-generation directives to ffmpeg.
So I've added the -override_ffserver parameter along with explicit audio and video encoding parameters:
ffmpeg -re -rtbufsize 1500M -thread_queue_size 512 -rtsp_transport tcp -i rtsp://admin:admin@192.168.0.10:554 -f lavfi -i anullsrc -vf drawtext='fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:textfile=mytextfile.txt:reload=1:fontcolor=white:fontsize=48:box=1:boxcolor=black@0.5:boxborderw=5:x=(w-text_w):y=(h-text_h)' -vcodec libvpx -flags:v +global_header -b:v 1M -acodec libopus -ac 1 -ar 24000 -flags:a +global_header -s 320x190 -r 25 -override_ffserver http://localhost:8090/PREVIEW.ffm
Still, the stream looks incorrect, as ffplay logs lines such as:
[vp8 @ 0000000004e88220] Unknown profile 4
[vp8 @ 0000000004e88220] Header size larger than data
Can anyone give me a hint on how to solve this problem?
Many thanks.
EDIT:
I've found a workaround: pipe the video+audio+text mix to a second ffmpeg process. Hence, there's no need for the -override_ffserver parameter anymore:
ffmpeg -y -re -rtbufsize 1500M -thread_queue_size 512 -rtsp_transport tcp -i rtsp://admin:admin@192.168.0.10:554 -f lavfi -i anullsrc -vf drawtext='fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:textfile=mytextfile.txt:reload=1:fontcolor=white:fontsize=48:box=1:boxcolor=black@0.5:boxborderw=5:x=(w-text_w):y=(h-text_h)' -f nut pipe:1 | ffmpeg -i pipe:0 http://localhost:8090/PREVIEW.ffm
But if anyone has a better, leaner, simpler solution, I'm all for it.
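For completeness, the two-process workaround above can be wrapped in a small supervisor script. A minimal Python sketch, assuming the same RTSP source, drawtext filter, and feed URL as in the question (build_cmds only assembles the argument lists; run_pipeline actually spawns both ffmpeg processes, so it needs ffmpeg on the PATH):

```python
import subprocess

# drawtext filter copied from the question; the font path and text file are assumptions
DRAWTEXT = (
    "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:"
    "textfile=mytextfile.txt:reload=1:fontcolor=white:fontsize=48:"
    "box=1:boxcolor=black@0.5:boxborderw=5:x=(w-text_w):y=(h-text_h)"
)

def build_cmds(rtsp_url, feed_url):
    """Assemble argv lists for the filter stage and the ffserver feed stage."""
    filter_cmd = [
        "ffmpeg", "-y", "-re", "-rtbufsize", "1500M",
        "-thread_queue_size", "512", "-rtsp_transport", "tcp",
        "-i", rtsp_url,
        "-f", "lavfi", "-i", "anullsrc",
        "-vf", DRAWTEXT,
        "-f", "nut", "pipe:1",     # raw mux to stdout, as in the workaround
    ]
    feed_cmd = ["ffmpeg", "-i", "pipe:0", feed_url]
    return filter_cmd, feed_cmd

def run_pipeline(rtsp_url, feed_url):
    """Spawn both stages with stage 1's stdout wired to stage 2's stdin."""
    filter_cmd, feed_cmd = build_cmds(rtsp_url, feed_url)
    p1 = subprocess.Popen(filter_cmd, stdout=subprocess.PIPE)
    p2 = subprocess.Popen(feed_cmd, stdin=p1.stdout)
    p1.stdout.close()  # so p1 sees a broken pipe if p2 exits
    return p1, p2
```

Wiring the pipe in a supervisor like this makes it easier to restart both stages together when the camera drops.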
-
How to save recorded ffmpeg webcam video to Red5
20 July 2014, by user3451310
I'm trying to make a webcam-recording program that can be accessed remotely by other users. I completed the recording locally, but when I try to save it to Red5 at rtmp://localhost/oflaDemo/streams/output.flv, no output is produced in the streams directory, and I don't know how other users can stream it while recording. Can someone help me with this problem? Thanks. Here's my code:
Thread webcam = new Thread() {
    public void run() {
        String fileName = new SimpleDateFormat("yyyy-MM-dd-hhmm").format(new Date());
        try {
            // Grab frames from the default webcam (device 0)
            OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0);
            grabber.start();
            opencv_core.IplImage grabbedImage = grabber.grab();
            CanvasFrame canvasFrame = new CanvasFrame("Video recorder");
            canvasFrame.setCanvasSize(grabbedImage.width(), grabbedImage.height());
            grabber.setFrameRate(grabber.getFrameRate());
            // Publish to the Red5 demo application as an FLV stream
            FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(
                    "rtmp://localhost/oflaDemo/streams/output.flv", 800, 600);
            recorder.setFormat("flv");
            recorder.setFrameRate(6);
            recorder.setVideoBitrate(1024 * 1024);
            recorder.start();
            // Grab, preview, and record until the window is closed
            while (canvasFrame.isVisible() && (grabbedImage = grabber.grab()) != null) {
                canvasFrame.showImage(grabbedImage);
                recorder.record(grabbedImage);
            }
            recorder.stop();
            grabber.stop();
            canvasFrame.dispose();
        } catch (FrameGrabber.Exception ex) {
            Logger.getLogger(web.class.getName()).log(Level.SEVERE, null, ex);
        } catch (FrameRecorder.Exception ex) {
            Logger.getLogger(web.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
};
webcam.start();
-
Unable to record mediasoup producer using FFmpeg on real server
30 November 2020, by Sarvesh Patil
I have built a nice app in React Native for audio calling, many thanks to MediaSoup!


To take it to next level, I need to record some of my calls.
I used this tutorial for reference: the mediasoup recording demo.


I followed the FFmpeg route and have reached the point where I create a PlainTransport with:


router.createPlainTransport({
  // No RTP will be received from the remote side
  comedia: false,
  // FFmpeg and GStreamer don't support RTP/RTCP multiplexing ("a=rtcp-mux" in SDP)
  rtcpMux: false,
  listenIp: { ip: "0.0.0.0", announcedIp: "MY_PUBLIC_IP" },
});




Then I connect to this transport:


rtpPlainTransport.connect({
  ip: "127.0.0.1",
  port: port1,
  rtcpPort: port2,
});




My first doubt: is the IP address supplied in the .connect({}) parameters above correct?


Second, the FFmpeg command requires an SDP file. This is mine:


v=0
o=- 0 0 IN IP4 127.0.0.1
s=-
c=IN IP4 127.0.0.1
t=0 0
m=audio port1 RTP/AVPF 111
a=rtcp:port2
a=rtpmap:111 opus/48000/2
a=fmtp:111 minptime=10;useinbandfec=1
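Rather than editing the port1/port2 placeholders by hand, the SDP above can be generated from whatever ports the PlainTransport actually allocated. A minimal sketch of such a templating helper (written in Python here even though the server side is Node; the helper name and the sample ports in the usage note are assumptions, not values from the question):

```python
def build_sdp(audio_port, rtcp_port, payload_type=111):
    """Build a minimal SDP describing the Opus RTP stream FFmpeg should read."""
    return "\n".join([
        "v=0",
        "o=- 0 0 IN IP4 127.0.0.1",
        "s=-",
        "c=IN IP4 127.0.0.1",
        "t=0 0",
        f"m=audio {audio_port} RTP/AVPF {payload_type}",
        f"a=rtcp:{rtcp_port}",
        f"a=rtpmap:{payload_type} opus/48000/2",
        f"a=fmtp:{payload_type} minptime=10;useinbandfec=1",
    ]) + "\n"

# e.g. build_sdp(5004, 5005) -> an SDP bound to sample ports 5004/5005
```

At runtime you would write the returned string to a file (e.g. input.sdp) and hand it to FFmpeg with something like -protocol_whitelist file,rtp,udp -i input.sdp.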



When I start recording, the FFmpeg process does not receive any data.
Moreover, on stopping, I get the following message:




Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used). Exiting normally, received signal 2.
Recording process exit, code: 255, signal: null




I was able to make the recording save with 127.0.0.1 when the server itself was running on localhost.


However, with my actual server hosted with Nginx, I'm not able to figure out what is going wrong.


I can see data being sent on my audio port:


1 0.000000000 127.0.0.1 → 127.0.0.1 UDP 117 10183 → 5004 Len=75
2 0.020787740 127.0.0.1 → 127.0.0.1 UDP 108 10183 → 5004 Len=66
3 0.043201757 127.0.0.1 → 127.0.0.1 UDP 118 10183 → 5004 Len=76
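Before blaming FFmpeg, it can help to confirm that RTP datagrams actually reach the port FFmpeg is supposed to listen on. A minimal Python probe sketch (stop FFmpeg first, since only one process can bind the port; the port number is whatever you passed as port in rtpPlainTransport.connect):

```python
import socket

def probe_udp(port, timeout=2.0, host="127.0.0.1"):
    """Bind to the RTP port and wait for one datagram.

    Returns (payload_length, sender_address) if a packet arrives,
    or None if nothing shows up within the timeout.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.bind((host, port))
        try:
            data, addr = sock.recvfrom(2048)
            return len(data), addr
        except socket.timeout:
            return None
    finally:
        sock.close()
```

If the probe sees packets but FFmpeg does not, the mismatch is usually in the SDP (wrong port or IP) rather than in the mediasoup side.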



What do I do with FFmpeg so that it starts recording?


Can someone please help?