
Media (1)
-
SWFUpload Process
6 September 2011
Updated: September 2011
Language: French
Type: Text
Other articles (62)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out. -
User profiles
12 April 2011. Each user has a profile page for editing their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in on the site.
Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...) -
Configuring language support
15 November 2010. Accessing the configuration and adding supported languages
To configure support for new languages, go to the "Administrer" (administration) area of the site.
From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language; once one exists, the language is greyed out in the configuration and (...)
On other sites (6459)
-
Extract Video Frames from SDP Output
1 August 2022, by user1446797
Does anyone know how to extract image frames from an SDP video output? I'm using a Nest battery camera. The wired version gave me an RTSP stream from which it was easy to extract frames. However, the battery version gives me an SDP description, which is harder to make sense of. I've looked at a few posts on Stack Overflow, but none seemed too promising:




Executing FFmpeg recording using in-line SDP


Even being able to record the SDP session to an mp4 file would be a nice start. But ultimately I would like to run a Python script to extract frames from the SDP output.


I must admit, SDP (Session Description Protocol) seems pretty long and complicated compared to working with RTSP streams. Is there any way to simply convert an SDP stream to an RTSP stream?


https://andrewjprokop.wordpress.com/2013/09/30/understanding-session-description-protocol-sdp/


Thanks!
Jacob


The SDP output looks something like this:


v=0\r\no=- 0 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE 0 2 1\r\na=msid-semantic : WMS 16733765853514488918/633697675 virtual-6666\r\na=ice-lite\r\nm=audio 19305 UDP/TLS/RTP/SAVPF 111\r\nc=IN IP4 142.250.9.127\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=candidate : 1 udp 2113939711 2607:f8b0:4002:c11::7f 19305 typ host generation 0\r\na=candidate : 1 tcp 2113939710 2607:f8b0:4002:c11::7f 19305 typ host tcptype passive generation 0\r\na=candidate : 1 ssltcp 2113939709 2607:f8b0:4002:c11::7f 443 typ host generation 0\r\na=candidate : 1 udp 2113932031 142.250.9.127 19305 typ host generation 0\r\na=candidate : 1 tcp 2113932030 142.250.9.127 19305 typ host tcptype passive generation 0\r\na=candidate : 1 ssltcp 2113932029 142.250.9.127 443 typ host generation 0\r\na=ice-ufrag:UVDO0GOJASABT95E\r\na=ice-pwd:FRILJDCJZCH+51YNWDGZIN0K\r\na=fingerprint:sha-256 24:53:14:34:59:50:89:52:72:58:04:57:71:BB:C4:89:91:3A:52:EF:C0:5A:A5:EC:B5:51:64:80:AC:13:89:8A\r\na=setup:passive\r\na=mid:0\r\na=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level\r\na=extmap:3 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01\r\na=sendrecv\r\na=msid:virtual-6666 virtual-6666\r\na=rtcp-mux\r\na=rtpmap:111 opus/48000/2\r\na=rtcp-fb:111 transport-cc\r\na=fmtp:111 minptime=10 ;useinbandfec=1\r\na=ssrc:6666 cname:6666\r\nm=video 9 UDP/TLS/RTP/SAVPF 108 109\r\nc=IN IP4 0.0.0.0\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=ice-ufrag:UVDO0GOJASABT95E\r\na=ice-pwd:FRILJDCJZCH+51YNWDGZIN0K\r\na=fingerprint:sha-256 24:53:14:34:59:50:89:52:72:58:04:57:71:BB:C4:89:91:3A:52:EF:C0:5A:A5:EC:B5:51:64:80:AC:13:89:8A\r\na=setup:passive\r\na=mid:1\r\na=extmap:2 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time\r\na=extmap:13 urn:3gpp:video-orientation\r\na=extmap:3 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01\r\na=sendrecv\r\na=msid:16733765853514488918/633697675 16733765853514488918/633697675\r\na=rtcp-mux\r\na=rtpmap:108 H264/90000\r\na=rtcp-fb:108 transport-cc\r\na=rtcp-fb:108 ccm fir\r\na=rtcp-fb:108 nack\r\na=rtcp-fb:108 nack pli\r\na=rtcp-fb:108 goog-remb\r\na=fmtp:108 level-asymmetry-allowed=1 ;packetization-mode=1 ;profile-level-id=42e01f\r\na=rtpmap:109 rtx/90000\r\na=fmtp:109 apt=108\r\na=ssrc-group:FID 633697675 3798748564\r\na=ssrc:633697675 cname:633697675\r\na=ssrc:3798748564 cname:633697675\r\nm=application 9 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\na=ice-ufrag:UVDO0GOJASABT95E\r\na=ice-pwd:FRILJDCJZCH+51YNWDGZIN0K\r\na=fingerprint:sha-256 24:53:14:34:59:50:89:52:72:58:04:57:71:BB:C4:89:91:3A:52:EF:C0:5A:A5:EC:B5:51:64:80:AC:13:89:8A\r\na=setup:passive\r\na=mid:2\r\na=sctpmap:5000 webrtc-datachannel 1024\r\n
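For reference (not part of the original question): when an SDP file describes plain, unencrypted RTP, ffmpeg can usually read it directly once the relevant protocols are whitelisted. The SDP above, however, describes a WebRTC-style session using DTLS-SRTP (UDP/TLS/RTP/SAVPF), which ffmpeg cannot decrypt on its own, so the sketch below is only a starting point; the file names and the one-frame-per-second rate are illustrative assumptions.

# Minimal sketch, assuming a plain-RTP session description saved locally as nest.sdp
# (it will NOT work as-is for the DTLS-SRTP session shown above)

# record the described video to a file without re-encoding
ffmpeg -protocol_whitelist file,udp,rtp -i nest.sdp -c:v copy -an recording.mp4

# or extract one JPEG frame per second
ffmpeg -protocol_whitelist file,udp,rtp -i nest.sdp -vf fps=1 frame_%04d.jpg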


-
How would I add an audio channel to an RTSP stream?
9 September 2022, by PlayerWet
Hello everyone. It turns out that my Raspberry Pi can't cope with combining video and audio at good quality and sending the result over RTSP. But I think I could send the video over RTSP and the audio separately as MP3, and then join them again on another computer (a Debian NAS) on my home LAN where I have Shinobi (a security-camera manager) installed.


I would need something that can grab the RTSP stream plus the separate MP3 audio and merge them into a new RTSP stream. Is this possible, or is it a crazy idea?


On one side I send the camera feed over RTSP with v4l2rtspserver:


v4l2rtspserver -H 1080 -W 1920 -F 25 -P 8555 /dev/video0



And separately I send MP3 audio from a USB microphone with ffmpeg:


ffmpeg -ac 1 -f alsa -i hw:1,0 -acodec libmp3lame -ab 32k -ac 1 -f rtp rtp://192.168.1.77:12348
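As a side note (a sketch added here, not from the original post): when ffmpeg sends bare RTP like this, it can also write the matching session description with -sdp_file; a receiver needs that description to demux the stream later. The file name mic.sdp is an assumption.

# same audio command, additionally writing the session description to mic.sdp
ffmpeg -sdp_file mic.sdp -ac 1 -f alsa -i hw:1,0 \
       -acodec libmp3lame -ab 32k -ac 1 -f rtp rtp://192.168.1.77:12348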



My idea is to combine both on the NAS server into a new RTSP stream (or any other workable approach).


But I don't know whether ffmpeg can capture the video from an RTSP stream, join it with the MP3 audio, and produce a new RTSP stream.


That is, merge these two streams into one and publish the result as RTSP:


rtsp://192.168.1.57:8555/unicast

rtp://192.168.1.77:12348



I have tried this, but it gives me an error:


ffmpeg \
 -i rtsp://192.168.1.57:8555/unicast \
 -i rtp://192.168.1.37:12348 \
 -acodec copy -vcodec libx264 \
 -f rtp_mpegts "rtp://192.168.1.77:5000?ttl=2"



Error:


[h264 @ 0x55ac6acaf4c0] non-existing PPS 0 referenced
 Last message repeated 1 times
[h264 @ 0x55ac6acaf4c0] decode_slice_header error
[h264 @ 0x55ac6acaf4c0] no frame!
[h264 @ 0x55ac6acaf4c0] non-existing PPS 0 referenced
 Last message repeated 1 times
[h264 @ 0x55ac6acaf4c0] decode_slice_header error
[h264 @ 0x55ac6acaf4c0] no frame!
[h264 @ 0x55ac6acaf4c0] non-existing PPS 0 referenced
 Last message repeated 1 times



What am I doing wrong?
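A possible answer, sketched under a few assumptions: ffmpeg generally cannot demux a bare rtp:// input without a session description, and the non-existing PPS / decode_slice_header messages above typically come from decoding the H.264 mid-stream before its parameter sets arrive; copying the video stream instead of re-encoding avoids decoding it at all. One approach is to put an SDP file describing the MP3-over-RTP audio on the NAS (for example the mic.sdp that ffmpeg can generate on the Pi with -sdp_file, or an equivalent hand-written one), read the camera over RTSP, and remux both into a single RTSP output. The output URL rtsp://localhost:8554/merged assumes an RTSP server such as MediaMTX (rtsp-simple-server) is running on the NAS; Shinobi can then be pointed at that URL.

# Sketch (untested): remux camera video + microphone audio into one RTSP stream.
# mic.sdp describes the MP3-over-RTP audio arriving on port 12348; the output URL
# assumes an RTSP server (e.g. MediaMTX) is listening on the NAS.
ffmpeg -rtsp_transport tcp -i rtsp://192.168.1.57:8555/unicast \
       -protocol_whitelist file,udp,rtp -i mic.sdp \
       -map 0:v -map 1:a -c copy \
       -f rtsp rtsp://localhost:8554/merged

With -c copy nothing is decoded on the NAS, so the cost stays low; if Shinobi needs a different container or codec, re-encoding can be reintroduced there rather than on the Pi.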


-
How to speed up ffmpeg when cropping and overlaying a thousand blocks at the same time?
22 June 2022, by yuno saga
I am trying to scramble (encrypt) each video frame into randomly shuffled 16x16 blocks, so the result looks like a corrupted video but can be decoded back by anyone who knows the shuffle order. My problem is that ffmpeg encodes this very slowly. The source is a 3-minute 854x480 (480p) video, https://www.youtube.com/watch?v=dyRsYk0LyA8, and this is an example of a filtered frame: https://i.ibb.co/0nvLzkK/output-9.jpg. Each frame has 1589 blocks. How can I speed this up? After 3 minutes only 24 frames were done; the video has about 5000 frames, so it would take roughly 10 hours. I also don't know why ffmpeg only uses about 25% of my CPU.


const { spawn } = require('child_process');
const fs = require('fs');

// Fisher-Yates shuffle (in place).
function shuffle(array) {
  let currentIndex = array.length, randomIndex;

  // While there remain elements to shuffle.
  while (currentIndex != 0) {

    // Pick a remaining element.
    randomIndex = Math.floor(Math.random() * currentIndex);
    currentIndex--;

    // And swap it with the current element.
    [array[currentIndex], array[randomIndex]] =
      [array[randomIndex], array[currentIndex]];
  }

  return array;
}

// Build the filtergraph: one crop per 16x16 block, then a chain of overlays
// that pastes each cropped block back at a shuffled position.
function filter(width, height) {
  const sizeBlock = 16;
  let filterCommands = '';
  let totalBlock = 0;
  const widthLengthBlock = Math.floor(width / sizeBlock);
  const heightLengthBlock = Math.floor(height / sizeBlock);
  let info = [];

  for (let i = 0; i < widthLengthBlock; i++) {
    for (let j = 0; j < heightLengthBlock; j++) {
      const xPos = i * sizeBlock;
      const yPos = j * sizeBlock;
      filterCommands += `[0]crop=${sizeBlock}:${sizeBlock}:${xPos}:${yPos}[c${totalBlock}];`;

      info.push({
        id: totalBlock,
        x: xPos,
        y: yPos
      });

      totalBlock += 1;
    }
  }

  info = shuffle(info);

  for (let i = 0; i < info.length; i++) {
    if (i == 0) filterCommands += '[0]';
    if (i != 0) filterCommands += `[o${i}]`;

    filterCommands += `[c${i}]overlay=x=${info[i].x}:y=${info[i].y}`;

    if (i != (info.length - 1)) filterCommands += `[o${i + 1}];`;
  }

  return filterCommands;
}

const query = filter(854, 480);

fs.writeFileSync('filter.txt', query);

// Run ffmpeg with the generated filtergraph script.
// '-progress -' makes ffmpeg write machine-readable progress to stdout.
const task = spawn('ffmpeg', [
  '-i',
  'C:\\Software Development\\ffmpeg\\blackpink.mp4',
  '-filter_complex_script',
  'C:\\Software Development\\project\\filter.txt',
  '-c:v',
  'libx264',
  '-preset',
  'ultrafast',
  '-pix_fmt',
  'yuv420p',
  '-c:a',
  'libopus',
  '-progress',
  '-',
  '-y',
  'output.mp4'
], {
  cwd: 'C:\\Software Development\\ffmpeg'
});

task.stdout.on('data', data => {
  console.log(data.toString());
});
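A possible direction, sketched rather than tested: the 1589 crop filters feed one long chain of overlay filters, which processes a single frame at a time and therefore keeps roughly one core busy; that would match the reported 25% CPU usage on a four-core machine. One workaround is to split the video into segments, scramble the segments in parallel ffmpeg processes that reuse the same filter.txt (so every segment gets the same shuffle), and concatenate the results. The segment length, the file names and dropping the audio during the parallel step are assumptions here.

# split into ~30 s segments without re-encoding (cuts fall on keyframes)
ffmpeg -i blackpink.mp4 -c copy -f segment -segment_time 30 -reset_timestamps 1 part_%03d.mp4

# scramble every segment in a separate ffmpeg process, all running in parallel
for f in part_*.mp4; do
  ffmpeg -i "$f" -filter_complex_script filter.txt \
         -c:v libx264 -preset ultrafast -pix_fmt yuv420p -an "scrambled_$f" &
done
wait

# concatenate the scrambled segments back into one file
printf "file '%s'\n" scrambled_part_*.mp4 > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy output_scrambled.mp4

If the audio track is still needed, it can be copied back from the original source in one extra pass with -map.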