
Other articles (77)
-
Customizing by adding your logo, banner or background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
The farm's regular Cron tasks
1 December 2010
Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of all the farm's instances on a regular basis. Combined with a system Cron on the central site of the farm, this makes it possible to generate regular visits to the various sites and to prevent the tasks of rarely visited sites from being too (...) -
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.
On other sites (7223)
-
FFMpeg: write h264 stream to mp4 container without changes
3 March 2017, by Bumblebee
Good day.
For brevity, the code omits error handling and memory management.
I want to capture an h264 video stream and pack it into an mp4 container without changes. Since I don't control the source of the stream, I cannot make assumptions about its structure, so I must probe the input stream.
AVProbeData probeData;
probeData.buf_size = s->BodySize();
probeData.buf = s->GetBody();
probeData.filename = "";
AVInputFormat* inFormat = av_probe_input_format(&probeData, 1);

This code correctly detects the h264 stream.
Next, I create the input format context:
unsigned char* avio_input_buffer = reinterpret_cast<unsigned char*>(av_malloc(AVIO_BUFFER_SIZE));
AVIOContext* avio_input_ctx = avio_alloc_context(avio_input_buffer, AVIO_BUFFER_SIZE,
0, this, &read_packet, NULL, NULL);
AVFormatContext* ifmt_ctx = avformat_alloc_context();
ifmt_ctx->pb = avio_input_ctx;
int ret = avformat_open_input(&ifmt_ctx, NULL, inFormat, NULL);
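The read_packet callback passed to avio_alloc_context is not shown in the question. Purely as a sketch (StreamSource and its Read method are assumptions, not part of the original code), such a callback usually has this shape:

// Hypothetical read callback for the custom AVIOContext above.
// StreamSource and StreamSource::Read are placeholders, not from the question.
static int read_packet(void* opaque, uint8_t* buf, int buf_size)
{
    StreamSource* src = static_cast<StreamSource*>(opaque);
    int n = src->Read(buf, buf_size);   // copy up to buf_size bytes of h264 data
    return n > 0 ? n : AVERROR_EOF;     // signal end of stream to FFmpeg
}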
Then I set the image size:
ifmt_ctx->streams[0]->codec->width = ifmt_ctx->streams[0]->codec->coded_width = width;
ifmt_ctx->streams[0]->codec->height = ifmt_ctx->streams[0]->codec->coded_height = height;

Then I create the output format context:
unsigned char* avio_output_buffer = reinterpret_cast<unsigned char*>(av_malloc(AVIO_BUFFER_SIZE));
AVIOContext* avio_output_ctx = avio_alloc_context(avio_output_buffer, AVIO_BUFFER_SIZE,
1, this, NULL, &write_packet, NULL);
AVFormatContext* ofmt_ctx = nullptr;
avformat_alloc_output_context2(&ofmt_ctx, NULL, "mp4", NULL);
ofmt_ctx->pb = avio_output_ctx;
AVDictionary* dict = nullptr;
av_dict_set(&dict, "movflags", "faststart", 0);
// note: with flags = 0, this second call overwrites the "faststart" value set above
av_dict_set(&dict, "movflags", "frag_keyframe+empty_moov", 0);
AVStream* outVideoStream = avformat_new_stream(ofmt_ctx, nullptr);
avcodec_copy_context(outVideoStream->codec, ifmt_ctx->streams[0]->codec);
ret = avformat_write_header(ofmt_ctx, &dict);
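As an aside, avcodec_copy_context is deprecated in newer FFmpeg releases; a sketch of the equivalent step with the codecpar API (assuming FFmpeg 3.1 or later, not part of the original code) would be:

// Sketch for FFmpeg 3.1+: copy the stream parameters via codecpar instead of
// the deprecated avcodec_copy_context call used above.
avcodec_parameters_copy(outVideoStream->codecpar, ifmt_ctx->streams[0]->codecpar);
outVideoStream->time_base = ifmt_ctx->streams[0]->time_base;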
Initialization is done. Next, packets are shifted from the h264 stream into the mp4 container. I don't calculate pts and dts, because the source packets have AV_NOPTS_VALUE in them.
AVPacket pkt;
while (...)
{
ret = av_read_frame(ifmt_ctx, &pkt);
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
av_free_packet(&pkt);
}

Then I write the trailer and free the allocated memory. That is all. The code works, and I get a playable mp4 file.
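Since the source packets carry AV_NOPTS_VALUE, the muxer gets no timing information from them. Purely as an illustration (this is not part of the original code, and it assumes a constant 25 fps source), the same loop with explicit timestamps would look roughly like this:

// Assumption: constant 25 fps input; stamp each packet before writing it.
AVRational src_tb = { 1, 25 };                 // one tick per frame at 25 fps
AVStream* outStream = ofmt_ctx->streams[0];
int64_t frameIndex = 0;
AVPacket pkt;
while (av_read_frame(ifmt_ctx, &pkt) >= 0)
{
    if (pkt.pts == AV_NOPTS_VALUE)
    {
        pkt.pts = pkt.dts = av_rescale_q(frameIndex, src_tb, outStream->time_base);
        pkt.duration = av_rescale_q(1, src_tb, outStream->time_base);
        ++frameIndex;
    }
    ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
    av_free_packet(&pkt);
}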
Now the problem: the stream characteristics of the resulting file are not completely consistent with those of the source stream. In particular, the fps and bitrate are higher than they should be.
As a sample, below is the ffplay.exe output for the source stream:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'd:/movies/source.mp4':0/0
Metadata:
major_brand : isom
minor_version : 1
compatible_brands: isom
creation_time : 2014-04-14T13:03:54.000000Z
Duration: 00:00:58.08, start: 0.000000, bitrate: 12130 kb/s
Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661),yuv420p, 1920x1080, 12129 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Switch subtitle stream from #-1 to #-1 vq= 1428KB sq= 0B f=0/0
Seek to 49% ( 0:00:28) of total duration ( 0:00:58) B f=0/0
30.32 M-V: -0.030 fd= 87 aq= 0KB vq= 1360KB sq= 0B f=0/0

and for the resulting stream (which contains part of the source stream):
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'd:/movies/target.mp4':f=0/0
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1iso6mp41
encoder : Lavf57.56.101
Duration: 00:00:11.64, start: 0.000000, bitrate: 18686 kb/s
Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 1920x1080, 18683 kb/s, 38.57 fps, 40 tbr, 90k tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Switch subtitle stream from #-1 to #-1 vq= 2309KB sq= 0B f=0/0
5.70 M-V: 0.040 fd= 127 aq= 0KB vq= 2562KB sq= 0B f=0/0

So the question is: what did I miss when copying the stream? I will be grateful for any help.
Best regards
-
Graph-based video processing for .NET
23 October 2016, by Borv
Does anyone know a good object-oriented library (preferably high-level, like C# or Java) for working with video and audio streams?
I wrote an app which fiddles with video and audio streams, feeds and such. The original task was simple:
- grab an RTSP feed
- display original feed(s) on the display
- convert it to a series of h264 ts files
- extract audio into separate MP3 files
- upload the videos and audio to the web site (preferably in real time; a delay of a few minutes is acceptable)
As you may have already guessed, it is about recording events (e.g. lectures) and publishing them on the web.
To pull this off I needed some graph-based non-linear editing for media. Two weeks in, I have tried ffmpeg, vlc and WMF. The only library I got to work is ffmpeg, and that comes with lots of "however". WMF required a lot of coding (and I abandoned this path); vlc looked great on paper, but I stumbled across some bugs with input splitting that I could not get around (e.g. the transcode:es combination flat out refused to work).
So, the question: what are good non-linear editing libraries besides ffmpeg, vlc and wmf/directshow that allow building video processing graphs with sources, sinks and filters? Or perhaps good bindings over ffmpeg and vlc that allow building such graphs?
-
Wave bytes to buffer
24 August 2016, by Mohammad Abu Musa
I am encoding wav input from the microphone, which comes in four-byte format, into ogg. I think I have a problem shifting the bytes to the correct format; here is the code I am using.
To explain more: I get the audio frames from Google Chrome, where I receive data as a const 8-bit buffer, along with channels and samples. The data field is always in 4-byte format. I copy the data into a vector of type int16_t, then I loop to uninterleave the samples, which I think I am doing wrong. My question is: how can I make sure the data is formatted correctly for the ogg encoder to handle it correctly?

void EncoderInstance::OnGetBuffer(int32_t result, pp::AudioBuffer buffer) {
if (result != PP_OK)
return;
assert(buffer.GetSampleSize() == PP_AUDIOBUFFER_SAMPLESIZE_16_BITS);
const char* data = static_cast<const char*>(buffer.GetDataBuffer());
uint32_t channels = buffer.GetNumberOfChannels();
uint32_t samples = buffer.GetNumberOfSamples() / channels;
if (channel_count_ != channels || sample_count_ != samples) {
channel_count_ = channels;
sample_count_ = samples;
samples_.resize(sample_count_ * channel_count_);
// Try (+ 5) to ensure that we pick up a new set of samples between each
// timer-generated repaint.
timer_interval_ = (sample_count_ * 1000) / buffer.GetSampleRate() + 5;
// Start the timer for the first buffer.
if (first_buffer_) {
first_buffer_ = false;
ScheduleNextTimer();
}
}
if(is_audio_recording && is_audio_header_written_)
{
memcpy(samples_.data(), data,
sample_count_ * channel_count_ * sizeof(int16_t));
float **buffer=vorbis_analysis_buffer(&vd,samples);
/* uninterleave samples */
for(i=0;i<samples/4;i++)
{
buffer[0][i]=((samples_.at(i*4+1)<<8)|
(0x00ff&(int16_t)samples_.at(i*4)))/32768.f;
buffer[1][i]=((samples_.at(i*4+3)<<8)|
(0x00ff&(int16_t)samples_.at(i*4+2)))/32768.f;
}
vorbis_analysis_wrote(&vd,i);
while(vorbis_analysis_blockout(&vd,&vb)==1){
/* analysis, assume we want to use bitrate management */
vorbis_analysis(&vb,NULL);
vorbis_bitrate_addblock(&vb);
while(vorbis_bitrate_flushpacket(&vd,&op)){
/* weld the packet into the bitstream */
ogg_stream_packetin(&os,&op);
/* write out pages (if any) */
while(!eos){
int result=ogg_stream_pageout(&os,&og);
if(result==0)break;
glb_app_thread.message_loop().PostWork(callback_factory_.NewCallback(&EncoderInstance::writeAudioHeader));
if(ogg_page_eos(&og))eos=1;
}
}
}
}
audio_track_.RecycleBuffer(buffer);
audio_track_.GetBuffer(callback_factory_.NewCallbackWithOutput(
&EncoderInstance::OnGetBuffer));
}
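For comparison, and purely as a sketch (this is not code from the question): if samples_ already holds interleaved int16_t frames (left, right, left, right, ...) after the memcpy, the usual way to fill the float buffers returned by vorbis_analysis_buffer is one sample per channel per frame, without reassembling bytes:

// Sketch: deinterleave int16_t stereo frames straight into the vorbis buffers.
float** buf = vorbis_analysis_buffer(&vd, sample_count_);
for (uint32_t n = 0; n < sample_count_; ++n)
{
    buf[0][n] = samples_[n * 2]     / 32768.f;   // left channel
    buf[1][n] = samples_[n * 2 + 1] / 32768.f;   // right channel
}
vorbis_analysis_wrote(&vd, sample_count_);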