
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (38)
-
The MediaSPIP configuration area
29 November 2010
The MediaSPIP configuration area is reserved for administrators. An "administer" menu link is usually displayed at the top of the page [1].
It lets you configure your site in detail.
Navigation in this configuration area is divided into three parts: the general site configuration, which notably lets you modify: the main information about the site (...) -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (4161)
-
Nodejs spawn - child processes
24 February 2019, by z Eyeland
My Node.js application runs on a Raspberry Pi. It uses spawned child processes to trigger bash scripts that either record or compress the previously recorded file. Recording works fine. After recording, the user selects the compress button, which kills the recording process and fires the compression bash script. The issue is that whenever the video gets to around 1 minute long, the compression child process times out or something: I ran ps -ef to view all processes and noticed that the compression script is still there. When the video is short, the compression process completes its cycle and sends an API request to shut down the process. Here is some code.
The API endpoint, with a console log that lets me know when compression is done. When the video clips are longer, around 1 minute, this GET request never logs:
app.get('/clovis/api/led', function (req, res){
    console.log("api activated for resetLED function");
    resetLED();
    console.log("reset was completed");
})

The Node.js spawns that call the different bash scripts:
function setBashScript(scriptNum){
    if(scriptNum == 0){
        //do this
        child = spawn('./feedmpeg.sh');
        resetLED();
    }
    if(scriptNum == 1){
        //do this
        updatePicturePath();
        child = spawn('./feedSnapshot.sh', [pictureFilePath]);
        resetLED();
    }
    if(scriptNum == 2){
        //do this
        updateVideoPath();
        child = spawn('./feedmpegRecord.sh', [videoFilePath]);
        isRecording = true;
        resetLED();
        ledRed();
    }
    if(scriptNum == 10){
        //do this
        updateCompressedPath();
        child = spawn('./generalCompressionMP4.sh', [videoFilePath, compressedFilePath]);
        isRecording = false;
        resetLED();
        ledBlue();
    }
}

generalCompressionMP4.sh, the spawned process that doesn't complete if the video is too long. Normally, when the process completes, I head over to the localhost server and view the mp4 file. When this issue occurs, the mp4 doesn't load, the old path is not removed, and the API request doesn't send.
#!/bin/bash
FILEPATH="$1"
COMPRESSIONPATH="$2"
ffmpeg -i "$FILEPATH" -vcodec h264 -acodec mp2 "$COMPRESSIONPATH"
sudo rm "$FILEPATH"
curl -H "Content-Type: application/json" -X POST \
  http://localhost:3000/clovis/api/led

Why might the process get stuck? How can I fix this? Should I not use a spawned child process?
-
FFMPEG does HTTP range request only once (HTML5 MediaStream/WebM)
17 December 2018, by Youssef Gamil
The Issue
I am trying to pass an HTML5 MediaStream coming from a browser's webcam to a Socket.IO namespace, where it is buffered and made accessible to FFMPEG through an HTTP endpoint in the form of HTTP partial/range requests. It works perfectly in Firefox/Chrome. I have wrapped up a sample and pushed it to a GitHub repo.
Environment
- FireFox or Chrome (but not Safari)
- Node & NPM
Install
Run npm i from the project directory.
Run
- npm start : starts the express server.
- Open http://localhost:3000 in Firefox or Chrome, and accept the webcam access dialog.
- npm run ffmpeg -i http://localhost:3000/view output.webm : TODO: this doesn't output the full stream.
- http://localhost:3000/view is only served once the page has been opened in Firefox/Chrome, and the browser keeps requesting until the end of the stream or the server goes down.
- Note: npm run ffmpeg ... is just a shortcut to the ffmpeg executable dependency, which is installed at startup in node_modules/@ffmpeg-installer/linux-x64/ffmpeg (the path may differ depending on your OS).
FFMPEG Log
What I get after running ffmpeg -v trace -i http://localhost:3000/view -y tmp.mp4:
:Reading option '-v' ... matched as option 'v' (set logging level) with argument 'trace'.
Reading option '-i' ... matched as input url with argument 'http://localhost:3000/view'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option 'tmp.mp4' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option v (set logging level) with argument trace.
Applying option y (overwrite output files) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input url http://localhost:3000/view.
Successfully parsed a group of options.
Opening an input file: http://localhost:3000/view.
[NULL @ 0x65f1c40] Opening 'http://localhost:3000/view' for reading
[http @ 0x65f2540] Setting default whitelist 'http,https,tls,rtp,tcp,udp,crypto,httpproxy'
[tcp @ 0x65f4000] Original list of addresses:
[tcp @ 0x65f4000] Address 127.0.0.1 port 3000
[tcp @ 0x65f4000] Interleaved list of addresses:
[tcp @ 0x65f4000] Address 127.0.0.1 port 3000
[tcp @ 0x65f4000] Starting connection attempt to 127.0.0.1 port 3000
[tcp @ 0x65f4000] Successfully connected to 127.0.0.1 port 3000
[http @ 0x65f2540] request: GET /view HTTP/1.1
User-Agent: Lavf/58.18.104
Accept: */*
Range: bytes=0-
Connection: close
Host: localhost:3000
Icy-MetaData: 1
[http @ 0x65f2540] header='HTTP/1.1 206 Partial Content'
[http @ 0x65f2540] http_code=206
[http @ 0x65f2540] header='X-Powered-By: Express'
[http @ 0x65f2540] header='Access-Control-Allow-Origin: *'
[http @ 0x65f2540] header='Content-Range: bytes 0-1391616/1073741824'
[http @ 0x65f2540] header='Accept-Ranges: bytes'
[http @ 0x65f2540] header='Content-Length: 1391616'
[http @ 0x65f2540] header='Connection: close'
[http @ 0x65f2540] header='Content-Type: video/webm'
[http @ 0x65f2540] header='Date: Mon, 17 Dec 2018 10:15:14 GMT'
[http @ 0x65f2540] header=''
Probing matroska,webm score:100 size:2048
Probing mp3 score:1 size:2048
[matroska,webm @ 0x65f1c40] Format matroska,webm probed with size=2048 and score=100
st:0 removing common factor 1000000 from timebase
st:1 removing common factor 1000000 from timebase
[matroska,webm @ 0x65f1c40] Before avformat_find_stream_info() pos: 317 bytes read:3837 seeks:0 nb_streams:2
[matroska,webm @ 0x65f1c40] All info found
[matroska,webm @ 0x65f1c40] rfps: 29.750000 0.016025
[matroska,webm @ 0x65f1c40] rfps: 29.833333 0.013167
[matroska,webm @ 0x65f1c40] rfps: 29.916667 0.012378
[matroska,webm @ 0x65f1c40] rfps: 30.000000 0.013659
[matroska,webm @ 0x65f1c40] rfps: 29.970030 0.012960
[matroska,webm @ 0x65f1c40] stream 0: start_time: 0.000 duration: -9223372036854776.000
[matroska,webm @ 0x65f1c40] stream 1: start_time: -0.001 duration: -9223372036854776.000
[matroska,webm @ 0x65f1c40] format: start_time: -0.001 duration: -9223372036854.775 bitrate=0 kb/s
[matroska,webm @ 0x65f1c40] After avformat_find_stream_info() pos: 467388 bytes read:469955 seeks:0 frames:115
Input #0, matroska,webm, from 'http://localhost:3000/view':
Metadata:
encoder : QTmuxingAppLibWebM-0.0.1
Duration: N/A, start: -0.001000, bitrate: N/A
Stream #0:0(eng), 41, 1/1000: Video: vp8, 1 reference frame, yuv420p(progressive), 640x480, 0/1, SAR 1:1 DAR 4:3, 1k tbr, 1k tbn, 1k tbc (default)
Stream #0:1(eng), 74, 1/1000: Audio: opus, 48000 Hz, stereo, fltp, delay 312 (default)
Successfully opened the file.
Parsing a group of options: output url tmp.mp4.
-
Use libav to copy raw H264 stream as mp4 without codec
11 December 2018, by cKt 1010
Since my platform doesn't include libx264, I can't use the H264 codec in libav.
I know this question is similar to Raw H264 frames in mpegts container using libavcodec. But trust me, I tested bob2's method and it didn't work.
There are some problems here:
- How to set PTS and DTS?
If I use the setting below, which bob2 suggested, libav prints: "Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly"
packet.flags |= AV_PKT_FLAG_KEY;
packet.pts = packet.dts = 0;
So should I use the frame rate to calculate it manually?
- How to set PPS and SPS?
bob2 didn't show this in his code, but it seems we can't skip this step. Someone told me it should be set in extradata, which is in the AVCodecContext struct. But what is the format? Should it include the H264 header?
- Should we delete the
0x00 0x00 0x00 0x01
start code one by one?
It seems we must delete the H264 start code for every H264 frame, but that costs time. Must we really do it?
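The frame-rate arithmetic behind the first question can be sketched outside of libav (plain JavaScript here, just to illustrate the numbers): with a 90 kHz timebase and a constant frame rate, frame n gets pts = dts = n * (90000 / fps), which matches what the write_video_frame code in this post ends up doing.

```javascript
// MPEG-style 90 kHz timebase; with a constant frame rate, consecutive
// frames are (90000 / fps) ticks apart. Setting pts == dts assumes the
// stream has no B-frames (true for baseline H264).
const TIMEBASE = 90000;

function timestampsFor(frameIndex, fps) {
  const ticksPerFrame = TIMEBASE / fps; // e.g. 3000 ticks at 30 fps
  return { pts: frameIndex * ticksPerFrame, dts: frameIndex * ticksPerFrame };
}

console.log(timestampsFor(30, 30)); // frame 30 at 30 fps: one second in, 90000 ticks
```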
My code is a mess (I tried too many methods and am lost in it now...). I paste it below and hope it doesn't confuse you.
Init :
AVOutputFormat *fmt;
AVFormatContext *oc;
AVCodec *audio_codec = NULL, *video_codec = NULL;
Int32 ret;
assert(video_st != NULL);
assert(audio_st != NULL);
/* Initialize libavcodec, and register all codecs and formats. */
av_register_all();
/* allocate the output media context */
printf("MediaSave: save file to %s", pObj->filePath);
avformat_alloc_output_context2(&oc, NULL, NULL, pObj->filePath);
if (!oc) {
Vps_printf(
"Could not deduce output format from file extension: using MPEG.");
avformat_alloc_output_context2(&oc, NULL, "mpeg", pObj->filePath);
}
if (!oc)
return SYSTEM_LINK_STATUS_EFAIL;
pObj->oc = oc;
fmt = oc->oformat;
fmt->video_codec = AV_CODEC_ID_H264;
Vps_printf("MediaSave: codec is %s", fmt->name);
/* Add the video streams using the default format codecs
* and initialize the codecs. */
if ((fmt->video_codec != AV_CODEC_ID_NONE) &&
(pObj->formate_type & MEDIA_SAVE_TYPE_VIDEO)) {
add_stream(video_st, oc, &video_codec, fmt->video_codec);
//open_video(oc, video_codec, video_st);
pObj->video_st = video_st;
pObj->video_codec = video_codec;
}
if ((fmt->audio_codec != AV_CODEC_ID_NONE) &&
(pObj->formate_type & MEDIA_SAVE_TYPE_AUDIO)) {
add_stream(audio_st, oc, &audio_codec, fmt->audio_codec);
//open_audio(oc, audio_codec, audio_st);
pObj->audio_codec = audio_codec;
pObj->audio_st = audio_st;
}
/* open the output file, if needed */
if (!(fmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&oc->pb, pObj->filePath, AVIO_FLAG_WRITE);
if (ret < 0) {
Vps_printf("Could not open '%s': %s", pObj->filePath,
av_err2str(ret));
return SYSTEM_LINK_STATUS_EFAIL;
}
}

Write h264:
/* Write the stream header, if any. */
ret = avformat_write_header(oc, NULL);
int nOffset = 0;
int nPos =0;
uint8_t sps_pps[4] = { 0x00, 0x00, 0x00, 0x01 };
while(1)
{
AVFormatContext *oc = pObj->oc;
nPos = ReadOneNaluFromBuf(&naluUnit, bitstreamBuf->bufAddr + nOffset, bitstreamBuf->fillLength - nOffset);
if(naluUnit.type == 7 || naluUnit.type == 8) {
Vps_printf("Get type 7 or 8, Store it to extradata");
video_st->st->codec->extradata_size = naluUnit.size + sizeof(sps_pps);
video_st->st->codec->extradata = OSA_memAllocSR(OSA_HEAPID_DDR_CACHED_SR1, naluUnit.size + AV_INPUT_BUFFER_PADDING_SIZE, 32U);
memcpy(video_st->st->codec->extradata, sps_pps , sizeof(sps_pps));
memcpy(video_st->st->codec->extradata + sizeof(sps_pps), naluUnit.data, naluUnit.size);
break;
}
nOffset += nPos;
write_video_frame(oc, video_st, naluUnit);
if (nOffset >= bitstreamBuf->fillLength) {
FrameCounter++;
break;
}
}
static Int32 write_video_frame(AVFormatContext *oc, OutputStream *ost,
NaluUnit bitstreamBuf) {
Int32 ret;
static Int32 waitkey = 1, ptsInc = 0;
if (0 > ost->st->index) {
Vps_printf("Stream index less than 0");
return SYSTEM_LINK_STATUS_EFAIL;
}
AVStream *pst = oc->streams[ost->st->index];
// Init packet
AVPacket pkt;
av_init_packet(&pkt);
pkt.flags |= (0 >= getVopType(bitstreamBuf.data, bitstreamBuf.size))
? AV_PKT_FLAG_KEY
: 0;
pkt.stream_index = pst->index;
// Wait for key frame
if (waitkey) {
if (0 == (pkt.flags & AV_PKT_FLAG_KEY)){
return SYSTEM_LINK_STATUS_SOK;
}
else {
waitkey = 0;
Vps_printf("First frame");
}
}
pkt.pts = (ptsInc) * (90000 / STREAM_FRAME_RATE);
pkt.dts = (ptsInc) * (90000/STREAM_FRAME_RATE);
pkt.duration = 3000;
pkt.pos = -1;
ptsInc++;
ret = av_interleaved_write_frame(oc, &pkt);
if (ret < 0) {
Vps_printf("cannot write frame");
}
    return SYSTEM_LINK_STATUS_SOK;
}