
Media (1)
-
Map of Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (64)
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in standalone form.
For a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
Write a news item
21 June 2013, by
Present the changes in your MediaSPIP, or news about your projects, via the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
Form for creating a news item: for a document of type news, the default fields are: Publication date (customize the publication date) (...)
On other sites (7799)
-
flv created using ffmpeg library plays too fast
25 April 2015, by Muhammad Ali
I am muxing an H.264 Annex-B stream and an ADTS AAC stream coming from an IP camera into an FLV. I have gone through all the necessary steps (that I knew of), e.g. stripping the ADTS header from the AAC and converting H.264 Annex-B to AVCC.
I am able to create the FLV file and it plays, but it plays too fast. The parameters of my output format's video codec are:
Time base = 1/60000 <-- I don't know why
Bitrate = 591949 (591 kbps)
GOP size = 12
FPS = 30 fps (that's the rate the encoder sends me data at)
The parameters of the output format's audio codec are:
Time base = 1/44100
Bitrate = 45382 (45 kbps)
Sample rate = 48000
I am using AV_NOPTS_VALUE for both audio and video.
The resulting video has double the bit rate (2 x (audio bitrate + video bitrate)) and half the duration.
If I play the resulting video in ffplay, the video plays back fast and ends quickly, but the audio plays at its original speed, so even after the video has ended the audio keeps playing to its full duration. If I set pts and dts to an increasing index (separate indices for audio and video), the video plays super fast, the bit rate shoots to an insane value and the video duration gets very short, but the audio plays fine and on time.
EDIT:
Duration: 00:00:09.96, start: 0.000000, bitrate: 1230 kb/s
Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709), 1280x720 [SAR 1:1 DAR 16:9], 591 kb/s, 30.33 fps, 59.94 tbr, 1k tbn, 59.94 tbc
Stream #0:1: Audio: aac, 48000 Hz, mono, fltp, 45 kb/s
Why is tbr 59.94? How was that calculated? Maybe that is the problem?
Code for muxing:
if(packet.header.dataType == TRANSFER_PACKET_TYPE_H264)
{
if((packet.data[0] == 0x00) && (packet.data[1] == 0x00) && (packet.data[2]==0x00) && (packet.data[3]==0x01))
{
unsigned char tempCurrFrameLength[4];
unsigned int nal_unit_length;
unsigned char nal_unit_type;
unsigned int cursor = 0;
int size = packet.header.dataLen;
do {
av_init_packet(&pkt);
int currFrameLength = 0;
if((packet.header.frameType == TRANSFER_FRAME_IDR_VIDEO) || (packet.header.frameType == TRANSFER_FRAME_I_VIDEO))
{
//pkt.flags |= AV_PKT_FLAG_KEY;
}
pkt.stream_index = packet.header.streamId;//0;//ost->st->index; //stream index 0 for vid : 1 for aud
outStreamIndex = outputVideoStreamIndex;
/*vDuration += (packet.header.dataPTS - lastvPts);
lastvPts = packet.header.dataPTS;
pkt.pts = pkt.dts= packet.header.dataPTS;*/
pkt.pts = pkt.dts = AV_NOPTS_VALUE;
if(framebuff != NULL)
{
//printf("Framebuff has mem alloc : freeing 1\n\n");
free(framebuff);
framebuff = NULL;
//printf("free successfully \n\n");
}
nal_unit_length = GetOneNalUnit(&nal_unit_type, packet.data + cursor/*pData+cursor*/, size-cursor);
if(nal_unit_length > 0 && nal_unit_type > 0)
{
}
else
{
printf("Fatal error : nal unit length wrong \n\n");
exit(0);
}
write_header_done = 1;
//#define _USE_SPS_PPS //comment this line to write everything on to the stream. SPS+PPSframeframe
#ifdef _USE_SPS_PPS
if (nal_unit_type == 0x07 /*NAL_SPS*/)
{ // write sps
printf("Got SPS \n");
if (_sps == NULL)
{
_sps_size = nal_unit_length -4;
_sps = new U8[_sps_size];
memcpy(_sps, packet.data+cursor+4, _sps_size); //exclude start code 0x00000001
}
}
else if (nal_unit_type == 0x08/*NAL_PPS*/)
{ // write pps
printf("Got PPS \n");
if (_pps == NULL)
{
_pps_size = nal_unit_length -4;
_pps = new U8[_pps_size];
memcpy(_pps, packet.data+cursor+4, _pps_size); //exclude start code 0x00000001
//out_stream->codec->extradata
//ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata
free(ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata);
ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata = (uint8_t*)av_mallocz(_sps_size + _pps_size);
memcpy(ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata,_sps,_sps_size);
memcpy(ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata + _sps_size,_pps,_pps_size);
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
//fprintf(stderr, "Error occurred when opening output file\n");
printf("Error occurred when opening output \n");
exit(0);
}
write_header_done = 1;
printf("Done writing header \n");
}
}
//else
#endif /*end _USE_SPS_PPS */
{ //IDR Frame
videoPts++;
if( (nal_unit_type == 0x06) || (nal_unit_type == 0x09) || (nal_unit_type == 0x07) || (nal_unit_type == 0x08))
{
av_free_packet(&pkt);
cursor += nal_unit_length;
continue;
}
if(nal_unit_type == 0x05)
{
//videoPts++;
}
if ((nal_unit_type != 0x07) && (nal_unit_type != 0x08))
{
vDuration += (packet.header.dataPTS - lastvPts);
lastvPts = packet.header.dataPTS;
//pkt.pts = pkt.dts= packet.header.dataPTS;
pkt.pts = pkt.dts= AV_NOPTS_VALUE;//videoPts;
}
else
{
//probably sps pps ... no need to transmit. free the packet
//av_free_packet(&pkt);
pkt.pts = pkt.dts = AV_NOPTS_VALUE;
}
currFrameLength = nal_unit_length - 4;//packet.header.dataLen -4;
tempCurrFrameLength[3] = currFrameLength;
tempCurrFrameLength[2] = currFrameLength>>8;
tempCurrFrameLength[1] = currFrameLength>>16;
tempCurrFrameLength[0] = currFrameLength>>24;
if(nal_unit_type == 0x05)
{
pkt.flags |= AV_PKT_FLAG_KEY;
}
framebuff = (unsigned char *)malloc(sizeof(unsigned char)* /*packet.header.dataLen*/nal_unit_length );
if(framebuff == NULL)
{
printf("Failed to allocate memory for frame \n\n ");
exit(0);
}
memcpy(framebuff, tempCurrFrameLength,0x04);
//memcpy(&framebuff[4], &packet.data[4] , currFrameLength);
//put_buffer(pData + cursor + 4, nal_unit_length - 4);// save ES data
memcpy(framebuff+4,packet.data + cursor + 4, currFrameLength );
pkt.data = framebuff;
pkt.size = nal_unit_length;//packet.header.dataLen ;
//printf("\nPrinting Frame| Size: %d | NALU Length: %d | NALU: %02x \n",pkt.size,nal_unit_length ,nal_unit_type);
/* GET READY TO TRANSMIT THE packet */
//pkt.duration = vDuration;
in_stream = ifmt_ctx->streams[pkt.stream_index];
out_stream = ofmt_ctx->streams[outStreamIndex];
cn = out_stream->codec;
//av_packet_rescale_ts(&pkt, cn->time_base, out_stream->time_base);
//pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
//pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
//pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
pkt.pos = -1;
pkt.stream_index = outStreamIndex;
if (!write_header_done)
{
}
else
{
//the doxygen docs suggest av_write_frame if I am taking care of interleaving myself
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
//ret = av_write_frame(ofmt_ctx, &pkt);
if (ret < 0)
{
fprintf(stderr, "Error muxing Video packet\n");
continue;
}
}
/*for(int ii = 0; ii < pkt.size ; ii++)
printf("%02x ",framebuff[ii]);*/
av_free_packet(&pkt);
if(framebuff != NULL)
{
//printf("Framebuff has mem alloc : freeing 2\n\n");
free(framebuff);
framebuff = NULL;
//printf("Freeing successfully \n\n");
}
/* TRANSMIT DONE */
}
cursor += nal_unit_length;
}while(cursor < size);
}
else
{
printf("This is not an Annex B bitstream \n\n");
for(int ii = 0; ii < packet.header.dataLen ; ii++)
printf("%02x ",packet.data[ii]);
printf("\n\n");
exit(0);
}
//video frame has been parsed completely.
continue;
}
else if(packet.header.dataType == TRANSFER_PACKET_TYPE_AAC)
{
av_init_packet(&pkt);
pkt.flags = 1;
pkt.pts = audioPts*1024;
pkt.dts = audioPts*1024;
//pkt.duration = 1024;
pkt.stream_index = packet.header.streamId + 1;//1;//ost->st->index; //stream index 0 for vid : 1 for aud
outStreamIndex = outputAudioStreamIndex;
//aDuration += (packet.header.dataPTS - lastaPts);
//lastaPts = packet.header.dataPTS;
//NOTE: audio sync requires this value
pkt.pts = pkt.dts= AV_NOPTS_VALUE ;
//pkt.pts = pkt.dts=audioPts++;
pkt.data = (uint8_t *)packet.data;//raw_data;
pkt.size = packet.header.dataLen;
}
//packet.header.streamId
//now assigning pkt.data in respective if statements above
//pkt.data = (uint8_t *)packet.data;//raw_data;
//pkt.size = packet.header.dataLen;
//pkt.duration = 24000; //24000 assumed based on observation
//duration calculation
/*if(packet.header.dataType == TRANSFER_PACKET_TYPE_H264)
{
pkt.duration = vDuration;
}
else*/ if(packet.header.dataType == TRANSFER_PACKET_TYPE_AAC)
{
//pkt.duration = aDuration;
}
in_stream = ifmt_ctx->streams[pkt.stream_index];
out_stream = ofmt_ctx->streams[outStreamIndex];
cn = out_stream->codec;
if(packet.header.dataType == TRANSFER_PACKET_TYPE_AAC)
ret= av_bitstream_filter_filter(aacbsfc, in_stream->codec, NULL, &pkt.data, &pkt.size, packet.data/*pkt.data*/, packet.header.dataLen, pkt.flags & AV_PKT_FLAG_KEY);
if(ret < 0)
{
printf("Failed to execute aac bitstream filter \n\n");
exit(0);
}
//if(packet.header.dataType == TRANSFER_PACKET_TYPE_H264)
// av_bitstream_filter_filter(h264bsfc, in_stream->codec, NULL, &pkt.data, &pkt.size, packet.data/*pkt.data*/, pkt.size, 0);
pkt.flags = 1;
//NOTE : Commented the lines below synced audio and video streams
//av_packet_rescale_ts(&pkt, cn->time_base, out_stream->time_base);
//pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
//pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
//pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
//enabled on Tuesday
pkt.pos = -1;
pkt.stream_index = outStreamIndex;
//the doxygen docs suggest av_write_frame if I am taking care of interleaving myself
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
//ret = av_write_frame(ofmt_ctx, &pkt);
if (ret < 0)
{
fprintf(stderr, "Error muxing packet\n");
continue;
}
av_free_packet(&pkt);
if(framebuff != NULL)
{
//printf("Framebuff has mem alloc : freeing 2\n\n");
free(framebuff);
framebuff = NULL;
//printf("Freeing successfully \n\n");
}
} -
Open avi file with OpenCV : ffmpeg ?
28 June 2012, by CTZStef
This question is related to a previous question I asked here.
I read on the Willow Garage website dedicated to OpenCV that, since version 1.2.x, we do not have to take care of ffmpeg while installing OpenCV. Here it is.
However, some questions asked here on Stack Overflow suggest the contrary.
So, what should I do? Do I have to recompile OpenCV and perform some special ffmpeg-related operation to finally get it to open an AVI file on my Linux system?
-
Video Conferencing in HTML5 : WebRTC via Web Sockets
14 June 2012, by silvia
A bit over a week ago I gave a presentation at Web Directions Code 2012 in Melbourne. Maxine and John asked me to speak about something related to HTML5 video, so I went for the new shiny: WebRTC – real-time communication in the browser.
I only had 20 min, so I had to make it tight. I wanted to show off video conferencing without special plugins in Google Chrome in just a few lines of code, as is the promise of WebRTC. To a large extent, I achieved this. But I made some interesting discoveries along the way. Demos are in the slide deck.
UPDATE: Opera 12 has been released with WebRTC support.
Housekeeping: if you want to replicate what I have done, you need to install Google Chrome 19+. Then make sure you go to chrome://flags and activate the MediaStream and PeerConnection experiment(s). Restart your browser and now you can experiment with this feature. Big warning up-front: it’s not production-ready, since there are still changes happening to the spec and there is no compatible implementation by another browser yet.
Here is a brief summary of the steps involved in setting up video conferencing in your browser:
- Set up a video element each for the local and the remote video stream.
- Grab the local camera and stream it to the first video element.
- (*) Establish a connection to another person running the same Web page.
- Send the local camera stream on that peer connection.
- Accept the remote camera stream into the second video element.
Now, the most difficult part of all of this – believe it or not – is the signalling part that is required to build the peer connection (marked with (*)). Initially I wanted to run completely without a server and just enter the remote’s IP address to establish the connection. This is, however, not a functionality that the PeerConnection object provides [might this be something to add to the spec?].
So, you need a server known to both parties that can provide for the handshake to set up the connection. All the examples that I have seen, such as https://apprtc.appspot.com/, use a channel management server on Google’s appengine. I wanted it all working with HTML5 technology, so I decided to use a Web Socket server instead.
I implemented my Web Socket server using node.js (code of websocket server). The video conferencing demo is in the slide deck in an iframe – you can also use the stand-alone html page. Works like a treat.
While it is still using Google’s STUN server to get through NAT, the messaging for setting up the connection is running completely through the Web Socket server. The messages that get exchanged are plain SDP message packets with a session ID. There are OFFER, ANSWER, and OK packets exchanged for each streaming direction. You can see some of it in the image below:
I’m not running a public WebSocket server, so you won’t be able to see this part of the presentation working. But the local loopback video should work.
At the conference, it all went without a hitch (while the wireless played along). I believe you have to host the WebSocket server on the same machine as the Web page, otherwise it won’t work for security reasons.
A whole new world of opportunities lies out there when we get the ability to set up video conferencing on every Web page – scary and exciting at the same time!