
Other articles (72)
-
Organising by category
17 May 2013
In MediaSPIP a section goes by two names: catégorie and rubrique.
The various documents stored in MediaSPIP can be filed under different categories. You can create a category by clicking "publish a category" in the "publish" menu at the top right (after logging in). A category can itself be placed inside another category, which means you can build a whole tree of categories.
The next time a document is published, the newly created category will be offered (...) -
Installation prerequisites
31 January 2010
Preamble
This article is not meant to detail how to install these programs, but rather to give information about their specific configuration.
First of all, SPIPMotion, like MediaSPIP, is designed to run on Debian-type Linux distributions and their derivatives (Ubuntu...). The documentation on this site therefore refers to those distributions. It can also be used on other Linux distributions, but correct operation cannot be guaranteed.
It (...) -
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded as Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded as Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text documents are analysed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
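These are exactly the kinds of conversions ffmpeg is used for throughout this site. As a rough sketch (the commands, filenames and codec choices below are illustrative assumptions, not MediaSPIP's actual pipeline), the formats listed above could be produced like this:
ffmpeg -i input.mov -vcodec libtheora -acodec libvorbis output.ogv
ffmpeg -i input.mov -vcodec libvpx -acodec libvorbis output.webm
ffmpeg -i input.mov -vcodec libx264 -acodec libfaac output.mp4
ffmpeg -i input.mov -vn -acodec libmp3lame output.mp3
ffmpeg -i input.mov -vn -acodec libvorbis output.ogg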
On other sites (3547)
-
ffmpeg trying to remove the first few seconds of video
21 October 2011, by zeroasterisk
I'm trying to remove a few seconds off the front of a video stream (audio already excluded), and I'm getting strange results: I would think that for every second I increase -ss by, the resulting file would be a second shorter... but that doesn't seem to be the case.
original ==> 01:04:52.84
-ss 19 ==> 01:04:42.84 (diff = 10) [command history shown below]
-ss 20 ==> 01:04:32.84 (diff = 20) [command history shown below]
-ss 21 ==> 01:04:32.84 (diff = 20)
-ss 25 ==> 01:04:32.84 (diff = 20)
-ss 0:0:25.0 ==> 01:04:32.84 (diff = 20)
-ss 0:0:25.5 ==> 01:04:32.84 (diff = 20) [command history shown below]
Command:
ffmpeg -ss # -i temp.mp4 -y -vcodec copy temp_croppedFromStart.mp4
Here's the command history for 19 & 20
# ffmpeg -ss 19 -i temp.mp4 -y -vcodec copy temp_croppedFromStart.mp4; ffmpeg -i temp.mp4 2>&1 | grep Duration; ffmpeg -i temp_croppedFromStart.mp4 2>&1 | grep Duration
ffmpeg version N-31809-g9acffed, Copyright (c) 2000-2011 the FFmpeg developers
built on Aug 10 2011 21:25:11 with gcc 4.4.5
configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab
libavutil 51. 11. 1 / 51. 11. 1
libavcodec 53. 10. 0 / 53. 10. 0
libavformat 53. 6. 0 / 53. 6. 0
libavdevice 53. 2. 0 / 53. 2. 0
libavfilter 2. 28. 1 / 2. 28. 1
libswscale 2. 0. 0 / 2. 0. 0
libpostproc 51. 2. 0 / 51. 2. 0
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'temp.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
creation_time : 1970-01-01 00:00:00
encoder : Lavf53.6.0
Duration: 01:04:52.84, start: 0.000000, bitrate: 553 kb/s
Stream #0.0(und): Video: h264 (Constrained Baseline), yuv420p, 960x640, 552 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc
Metadata:
creation_time : 1970-01-01 00:00:00
Output #0, mp4, to 'temp_croppedFromStart.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
creation_time : 1970-01-01 00:00:00
encoder : Lavf53.6.0
Stream #0.0(und): Video: libx264, yuv420p, 960x640, q=2-31, 552 kb/s, 25 tbn, 25 tbc
Metadata:
creation_time : 1970-01-01 00:00:00
Stream mapping:
Stream #0.0 -> #0.0
Press [q] to stop, [?] for help
frame=97071 fps=42523 q=-1.0 Lsize= 262684kB time=01:04:33.84 bitrate= 555.5kbits/s
video:261923kB audio:0kB global headers:0kB muxing overhead 0.290418%
Duration: 01:04:52.84, start: 0.000000, bitrate: 553 kb/s
Duration: 01:04:42.84, start: 0.000000, bitrate: 554 kb/s
# ffmpeg -ss 20 -i temp.mp4 -y -vcodec copy temp_croppedFromStart.mp4; ffmpeg -i temp.mp4 2>&1 | grep Duration; ffmpeg -i temp_croppedFromStart.mp4 2>&1 | grep Duration
ffmpeg version N-31809-g9acffed, Copyright (c) 2000-2011 the FFmpeg developers
built on Aug 10 2011 21:25:11 with gcc 4.4.5
configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab
libavutil 51. 11. 1 / 51. 11. 1
libavcodec 53. 10. 0 / 53. 10. 0
libavformat 53. 6. 0 / 53. 6. 0
libavdevice 53. 2. 0 / 53. 2. 0
libavfilter 2. 28. 1 / 2. 28. 1
libswscale 2. 0. 0 / 2. 0. 0
libpostproc 51. 2. 0 / 51. 2. 0
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'temp.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
creation_time : 1970-01-01 00:00:00
encoder : Lavf53.6.0
Duration: 01:04:52.84, start: 0.000000, bitrate: 553 kb/s
Stream #0.0(und): Video: h264 (Constrained Baseline), yuv420p, 960x640, 552 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc
Metadata:
creation_time : 1970-01-01 00:00:00
Output #0, mp4, to 'temp_croppedFromStart.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
creation_time : 1970-01-01 00:00:00
encoder : Lavf53.6.0
Stream #0.0(und): Video: libx264, yuv420p, 960x640, q=2-31, 552 kb/s, 25 tbn, 25 tbc
Metadata:
creation_time : 1970-01-01 00:00:00
Stream mapping:
Stream #0.0 -> #0.0
Press [q] to stop, [?] for help
frame=96821 fps=47168 q=-1.0 Lsize= 262003kB time=01:04:32.84 bitrate= 554.2kbits/s
video:261244kB audio:0kB global headers:0kB muxing overhead 0.290410%
Duration: 01:04:52.84, start: 0.000000, bitrate: 553 kb/s
Duration: 01:04:32.84, start: 0.000000, bitrate: 554 kb/s
# ffmpeg -ss 0:0:25.5 -i temp.mp4 -y -vcodec copy temp_croppedFromStart.mp4; ffmpeg -i temp.mp4 2>&1 | grep Duration; ffmpeg -i temp_croppedFromStart.mp4 2>&1 | grep Duration
ffmpeg version N-31809-g9acffed, Copyright (c) 2000-2011 the FFmpeg developers
built on Aug 10 2011 21:25:11 with gcc 4.4.5
configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab
libavutil 51. 11. 1 / 51. 11. 1
libavcodec 53. 10. 0 / 53. 10. 0
libavformat 53. 6. 0 / 53. 6. 0
libavdevice 53. 2. 0 / 53. 2. 0
libavfilter 2. 28. 1 / 2. 28. 1
libswscale 2. 0. 0 / 2. 0. 0
libpostproc 51. 2. 0 / 51. 2. 0
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'temp.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
creation_time : 1970-01-01 00:00:00
encoder : Lavf53.6.0
Duration: 01:04:52.84, start: 0.000000, bitrate: 553 kb/s
Stream #0.0(und): Video: h264 (Constrained Baseline), yuv420p, 960x640, 552 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc
Metadata:
creation_time : 1970-01-01 00:00:00
Output #0, mp4, to 'temp_croppedFromStart.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
creation_time : 1970-01-01 00:00:00
encoder : Lavf53.6.0
Stream #0.0(und): Video: libx264, yuv420p, 960x640, q=2-31, 552 kb/s, 25 tbn, 25 tbc
Metadata:
creation_time : 1970-01-01 00:00:00
Stream mapping:
Stream #0.0 -> #0.0
Press [q] to stop, [?] for help
frame=96821 fps=45920 q=-1.0 Lsize= 262003kB time=01:04:27.32 bitrate= 555.0kbits/s
video:261244kB audio:0kB global headers:0kB muxing overhead 0.290424%
Duration: 01:04:52.84, start: 0.000000, bitrate: 553 kb/s
Duration: 01:04:32.84, start: 0.000000, bitrate: 554 kb/s
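A likely explanation (not stated in the thread itself) is that with -vcodec copy ffmpeg can only start the output on a keyframe, so the requested -ss value falls back to the previous keyframe; the results above are consistent with a keyframe roughly every 10 seconds. Re-encoding the video allows a frame-accurate cut at the cost of speed and quality. A sketch of both options:
# stream copy: fast, but the cut snaps back to a keyframe
ffmpeg -ss 21 -i temp.mp4 -y -vcodec copy temp_croppedFromStart.mp4
# re-encode with -ss as an output option: slow (decodes from the start) but frame-accurate
ffmpeg -i temp.mp4 -ss 21 -y -vcodec libx264 temp_croppedFromStart.mp4
-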
FFmpeg - How to scale and then drawtext ?
19 January 2015, by vanderjas
I'm trying to scale and crop a video and then apply some text over the top.
I'm able to achieve each of these successfully using the following filters:
Scale & Crop
-vf scale=-1:640,crop=640:640:0:0
Draw Text
-vf drawtext="fontfile=Gotham-Book.ttf: text=TextHere:fontsize=72:fontcolor=gray@0.25:x=10:y=10"
I presume I need to use some kind of filter complex but I am not having any luck.
Can anyone help ?
Thanks in advance !
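One approach (a suggestion, not an answer from the original thread) is to chain the filters in a single -vf filtergraph, separated by commas, so that scaling, cropping and drawtext run in order; -filter_complex is only needed when there are multiple inputs or outputs. A sketch reusing the poster's parameters (filenames are placeholders):
ffmpeg -i input.mp4 -vf "scale=-1:640,crop=640:640:0:0,drawtext=fontfile=Gotham-Book.ttf:text=TextHere:fontsize=72:fontcolor=gray@0.25:x=10:y=10" output.mp4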
-
using libav instead of ffmpeg
21 January 2015, by n00bie
I want to stream video over HTTP using Ogg (Theora + Vorbis). I have a sender and a receiver, and I can run them from the command line:
Sender :
ffmpeg -f video4linux2 -s 320x240 -i /dev/mycam -codec:v libtheora -qscale:v 5 -f ogg http://127.0.0.1:8080
Receiver :
sudo gst-launch-0.10 tcpserversrc port = 8080 ! oggdemux ! theoradec ! autovideosink
Now the sender sends both audio and video, but the receiver plays only the video.
It works perfectly, but now I want to stop using the ffmpeg binary and use only the libav* libraries instead.
Here's my class for streaming:
class VCORE_LIBRARY_EXPORT VVideoWriter : private boost::noncopyable
{
public:
VVideoWriter( );
~VVideoWriter( );
bool openFile( const std::string& name,
int fps, int videoBitrate, int width, int height,
int audioSampleRate, bool stereo, int audioBitrate );
void close( );
bool writeVideoFrame( const uint8_t* image, int64_t timestamp );
bool writeAudioFrame( const int16_t* data, int64_t timestamp );
int audioFrameSize( ) const;
private:
AVFrame *m_videoFrame;
AVFrame *m_audioFrame;
AVFormatContext *m_context;
AVStream *m_videoStream;
AVStream *m_audioStream;
int64_t m_startTime;
};
Initialization:
bool VVideoWriter::openFile( const std::string& name,
int fps, int videoBitrate, int width, int height,
int audioSampleRate, bool stereo, int audioBitrate )
{
if( ! m_context )
{
// initalize the AV context
m_context = avformat_alloc_context( );
assert( m_context );
// get the output format
m_context->oformat = av_guess_format( "ogg", name.c_str( ), nullptr );
if( m_context->oformat )
{
strcpy( m_context->filename, name.c_str( ) );
auto codecID = AV_CODEC_ID_THEORA;
auto codec = avcodec_find_encoder( codecID );
if( codec )
{
m_videoStream = avformat_new_stream( m_context, codec );
assert( m_videoStream );
// initalize codec
auto codecContext = m_videoStream->codec;
bool globalHeader = m_context->oformat->flags & AVFMT_GLOBALHEADER;
if( globalHeader )
codecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
codecContext->codec_id = codecID;
codecContext->codec_type = AVMEDIA_TYPE_VIDEO;
codecContext->width = width;
codecContext->height = height;
codecContext->time_base.den = fps;
codecContext->time_base.num = 1;
codecContext->bit_rate = videoBitrate;
codecContext->pix_fmt = PIX_FMT_YUV420P;
codecContext->flags |= CODEC_FLAG_QSCALE;
codecContext->global_quality = FF_QP2LAMBDA * 5;
int res = avcodec_open2( codecContext, codec, nullptr );
if( res >= 0 )
{
auto codecID = AV_CODEC_ID_VORBIS;
auto codec = avcodec_find_encoder( codecID );
if( codec )
{
m_audioStream = avformat_new_stream( m_context, codec );
assert( m_audioStream );
// initalize codec
auto codecContext = m_audioStream->codec;
bool globalHeader = m_context->oformat->flags & AVFMT_GLOBALHEADER;
if( globalHeader )
codecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
codecContext->codec_id = codecID;
codecContext->codec_type = AVMEDIA_TYPE_AUDIO;
codecContext->sample_fmt = AV_SAMPLE_FMT_FLTP;
codecContext->bit_rate = audioBitrate;
codecContext->sample_rate = audioSampleRate;
codecContext->channels = stereo ? 2 : 1;
codecContext->channel_layout = stereo ? AV_CH_LAYOUT_STEREO : AV_CH_LAYOUT_MONO;
res = avcodec_open2( codecContext, codec, nullptr );
if( res >= 0 )
{
// try to open the file
if( avio_open( &m_context->pb, m_context->filename, AVIO_FLAG_WRITE ) >= 0 )
{
m_audioFrame->nb_samples = codecContext->frame_size;
m_audioFrame->format = codecContext->sample_fmt;
m_audioFrame->channel_layout = codecContext->channel_layout;
boost::posix_time::ptime time_t_epoch( boost::gregorian::date( 1970, 1, 1 ) );
m_context->start_time_realtime = ( boost::posix_time::microsec_clock::universal_time( ) - time_t_epoch ).total_microseconds( );
m_startTime = -1;
// write the header
if( avformat_write_header( m_context, nullptr ) >= 0 )
{
return true;
}
else std::cerr << "VVideoWriter: failed to write video header" << std::endl;
}
else std::cerr << "VVideoWriter: failed to open video file " << name << std::endl;
}
else std::cerr << "VVideoWriter: failed to initialize audio codec" << std::endl;
}
else std::cerr << "VVideoWriter: requested audio codec is not supported" << std::endl;
}
else std::cerr << "VVideoWriter: failed to initialize video codec" << std::endl;
}
else std::cerr << "VVideoWriter: requested video codec is not supported" << std::endl;
}
else std::cerr << "VVideoWriter: requested video format is not supported" << std::endl;
avformat_free_context( m_context );
m_context = nullptr;
m_videoStream = nullptr;
m_audioStream = nullptr;
}
return false;
}
Writing video:
bool VVideoWriter::writeVideoFrame( const uint8_t* image, int64_t timestamp )
{
if( m_context ) {
auto codecContext = m_videoStream->codec;
avpicture_fill( reinterpret_cast<AVPicture*>( m_videoFrame ),
const_cast<uint8_t*>( image ),
codecContext->pix_fmt, codecContext->width, codecContext->height );
AVPacket pkt;
av_init_packet( & pkt );
pkt.data = nullptr;
pkt.size = 0;
int gotPacket = 0;
if( ! avcodec_encode_video2( codecContext, &pkt, m_videoFrame, & gotPacket ) ) {
if( gotPacket == 1 ) {
pkt.stream_index = m_videoStream->index;
int res;
{
pkt.pts = AV_NOPTS_VALUE;
pkt.dts = AV_NOPTS_VALUE;
pkt.stream_index = m_videoStream->index;
res = av_write_frame( m_context, &pkt );
}
av_free_packet( & pkt );
return res >= 0;
}
assert( ! pkt.size );
return true;
}
}
return false;
}
Writing audio (for now I write dummy test audio):
bool VVideoWriter::writeAudioFrame( const int16_t* data, int64_t timestamp )
{
if( m_context ) {
auto codecContext = m_audioStream->codec;
int buffer_size = av_samples_get_buffer_size(nullptr, codecContext->channels, codecContext->frame_size, codecContext->sample_fmt, 0);
float *samples = (float*)av_malloc(buffer_size);
for (int i = 0; i < buffer_size / sizeof(float); i++)
samples[i] = 1000. * sin((double)i/2.);
int ret = avcodec_fill_audio_frame( m_audioFrame, codecContext->channels, codecContext->sample_fmt, (const uint8_t*)samples, buffer_size, 0);
assert( ret >= 0 );
(void)(ret);
AVPacket pkt;
av_init_packet( & pkt );
pkt.data = nullptr;
pkt.size = 0;
int gotPacket = 0;
if( ! avcodec_encode_audio2( codecContext, &pkt, m_audioFrame, & gotPacket ) ) {
if( gotPacket == 1 ) {
pkt.stream_index = m_audioStream->index;
int res;
{
pkt.pts = AV_NOPTS_VALUE;
pkt.dts = AV_NOPTS_VALUE;
pkt.stream_index = m_audioStream->index;
res = av_write_frame( m_context, &pkt );
}
av_free_packet( & pkt );
return res >= 0;
}
assert( ! pkt.size );
return true;
}
return false;
}
return false;
}
Here's a test example (I send video from the webcam plus dummy audio):
class TestVVideoWriter : public sigslot::has_slots<>
{
public:
TestVVideoWriter( ) :
m_fileOpened( false )
{
}
void onCapturedFrame( cricket::VideoCapturer*, const cricket::CapturedFrame* capturedFrame )
{
if( m_fileOpened ) {
m_writer.writeVideoFrame( reinterpret_cast<const uint8_t*>( capturedFrame->data ),
capturedFrame->time_stamp / 1000 );
m_writer.writeAudioFrame( nullptr , 0 );
} else {
m_fileOpened = m_writer.openFile( "http://127.0.0.1:8080",
15, 40000, capturedFrame->width, capturedFrame->height,
16000, false, 64000 );
}
}
public:
vcore::VVideoWriter m_writer;
bool m_fileOpened;
};
TestVVideoWriter testWriter;
BOOST_AUTO_TEST_SUITE(TEST_VIDEO_WRITER)
BOOST_AUTO_TEST_CASE(testWritingVideo)
{
cricket::LinuxDeviceManager deviceManager;
std::vector<cricket::Device> devs;
if( deviceManager.GetVideoCaptureDevices( &devs ) ) {
if( devs.size( ) ) {
boost::shared_ptr<cricket::VideoCapturer> camera( deviceManager.CreateVideoCapturer( devs[ 0 ] ) );
if( camera ) {
cricket::VideoFormat format( 320, 240, cricket::VideoFormat::FpsToInterval( 30 ),
camera->GetSupportedFormats( )->front( ).fourcc );
cricket::VideoFormat best;
if( camera->GetBestCaptureFormat( format, &best ) ) {
camera->SignalFrameCaptured.connect( &testWriter, &TestVVideoWriter::onCapturedFrame );
if( camera->Start( best ) != cricket::CS_FAILED ) {
boost::this_thread::sleep( boost::posix_time::seconds( 10 ) );
return;
}
}
}
}
}
std::cerr << "Problem has occured with camera" << std::endl;
}
BOOST_AUTO_TEST_SUITE_END() // TEST_VIDEO_WRITER
But in this case gstreamer starts playing the video only when my test program stops executing (after 10 seconds in this case). That does not suit me; I want gstreamer to start playing as soon as my test program starts.
Could someone help me?
P.S. Sorry for my English.