
Media (91)
-
#3 The Safest Place
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#1 The Wires
11 October 2011
Updated: February 2013
Language: English
Type: Audio
-
ED-ME-5 1-DVD
11 October 2011
Updated: October 2011
Language: English
Type: Audio
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
Other articles (51)
-
Changing your graphical theme
22 February 2011
The graphical theme does not alter the actual layout of the elements on the page; it only changes how those elements look.
Placement can indeed appear to change, but the change is purely visual and does not affect the semantic structure of the page.
Changing the graphical theme in use
To change the graphical theme in use, the zen-garden plugin must be enabled on the site.
You then simply go to the configuration area of the (...) -
Permissions overridden by plugins
27 April 2010
Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their own information on the authors page -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player was created specifically for MediaSPIP and can easily be adapted to fit a particular theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (7246)
-
avcodec_encode_video2() error -1: Could not encode video packet - javacv
7 March 2016, by 404error
I want to create a video (mp4) from a set of images and add a background sound to it. The background sound can either be recorded or a file may be browsed with a content chooser in Android.
The following code creates the video when a new audio track is recorded in 3gp format. However, when I browse an audio file (an mp3, for example), it shows this error and the recorded video cannot be played. The error shown is:
org.bytedeco.javacv.FrameRecorder$Exception: avcodec_encode_video2() error -1: Could not encode video packet.
    at video_settings$Generate.doInBackground(video_settings.java:298)
The code at video_settings.java:298 is:
recorder.record(frame2);
The relevant code is:
protected Void doInBackground(Void... arg0) {
    try {
        FrameGrabber grabber1 = new FFmpegFrameGrabber(paths.get(0));
        FrameGrabber grabber2 = new FFmpegFrameGrabber(backgroundSoundPath);
        Log.d("hgbj", backgroundSoundPath);
        grabber1.start();
        grabber2.start();
        FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(video, 320,
                240, grabber2.getAudioChannels()); // 320, 240
        recorder.setVideoCodec(avcodec.AV_CODEC_ID_MPEG4);
        recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
        recorder.setFormat("mp4");
        recorder.setFrameRate(2);
        recorder.setVideoBitrate(10 * 1024 * 1024);
        recorder.start();
        Frame frame1, frame2;
        long timeLength = grabber2.getLengthInTime();
        System.out.println("total time = " + timeLength);
        for (int i = 0; i < paths.size(); i++) {
            // Record this frame, then record (numFrames * percentageTime[i] / 100) frames of the audio.
            frame1 = grabber1.grabFrame();
            long startTime = System.currentTimeMillis();
            recorder.setTimestamp(startTime * 1000);
            recorder.record(frame1);
            boolean first = true;
            // While (current time - start time) < (percentage time * total time / 100): record frame2.
            long temp = timeLength * percentageTime[i] / 100000 + startTime;
            while (System.currentTimeMillis() <= temp) {
                frame2 = grabber2.grabFrame();
                if (frame2 == null) break;
                if (first) {
                    recorder.setTimestamp(startTime * 1000);
                    first = false;
                }
                recorder.record(frame2);
            }
            if (i < paths.size() - 1) {
                grabber1.stop();
                grabber1 = new FFmpegFrameGrabber(paths.get(i + 1));
                grabber1.start();
            }
        } // (the rest of the method is not shown in the question)
My question is: if it works for recorded 3gp files, why doesn't it work for browsed mp3 files, and what should I do to make it work?
I have tried changing the codecs, the frame height and width, and the video bitrate, but I don't know of any way to determine which bitrate etc. is compatible with a given codec/format.
I am converting the content URI obtained from the file browser into the real path, so that is not the issue.
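No answer is included in this excerpt. One plausible direction, sketched below as an assumption rather than a confirmed fix: the recorder above never sets any audio parameters, so an mp3 whose sample rate and codec differ from a freshly recorded 3gp may clash with the muxer defaults. The sketch mirrors the grabber's audio settings onto the recorder; the AAC codec and the 128 kbit/s fallback are illustrative choices, not from the original post.
FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(video, 320, 240,
        grabber2.getAudioChannels());
recorder.setVideoCodec(avcodec.AV_CODEC_ID_MPEG4);
recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
recorder.setFormat("mp4");
recorder.setFrameRate(2);
recorder.setVideoBitrate(10 * 1024 * 1024);
// Hedged sketch: configure the audio side explicitly instead of relying on defaults.
recorder.setAudioCodec(avcodec.AV_CODEC_ID_AAC);  // mp4-friendly audio codec (illustrative)
recorder.setSampleRate(grabber2.getSampleRate()); // follow the source, do not assume a default rate
int srcAudioBitrate = grabber2.getAudioBitrate();
recorder.setAudioBitrate(srcAudioBitrate > 0 ? srcAudioBitrate : 128 * 1024); // guessed fallback
recorder.start();
-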
How to change the frame rate and the captured image size from an IP camera
6 December 2015, by rockycai
I have an IP camera and I want to capture images from it over RTSP. I use the code below and it works well, but the camera's frame rate is 25 fps, so I get far more images per second than I want, and each image is 6.2 MB. I don't need high-quality images either. What can I do to lower the frame rate and reduce the size of each image?
#ifndef INT64_C
#define INT64_C(c) (c ## LL)
#define UINT64_C(c) (c ## ULL)
#endif
#ifdef __cplusplus
extern "C" {
#endif
/* Include ffmpeg header files */
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
#ifdef __cplusplus
}
#endif
#include <stdio.h>

static void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame);

int main (int argc, const char * argv[])
{
    AVFormatContext *pFormatCtx;
    int i, videoStream;
    AVCodecContext *pCodecCtx;
    AVCodec *pCodec;
    AVFrame *pFrame;
    AVFrame *pFrameRGB;
    AVPacket packet;
    int frameFinished;
    int numBytes;
    uint8_t *buffer;

    // Register all formats and codecs
    av_register_all();

    // const char *filename="C:\libraries\gfjyp.avi";
    // Open video file
    //AVDictionary *options = NULL;
    //av_dict_set(&options,"rtsp_transport","tcp",0);
    if(av_open_input_file(&pFormatCtx, argv[1], NULL, 0, NULL)!=0)
        return -1; // Couldn't open file

    // Retrieve stream information
    if(av_find_stream_info(pFormatCtx)<0)
        return -1; // Couldn't find stream information

    // Dump information about file onto standard error
    dump_format(pFormatCtx, 0, argv[1], false);

    // Find the first video stream
    videoStream=-1;
    for(i=0; i<pFormatCtx->nb_streams; i++)
        if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO)
        {
            videoStream=i;
            break;
        }
    if(videoStream==-1)
        return -1; // Didn't find a video stream

    // Get a pointer to the codec context for the video stream
    pCodecCtx=pFormatCtx->streams[videoStream]->codec;

    // Find the decoder for the video stream
    pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
    if(pCodec==NULL)
        return -1; // Codec not found

    // Open codec
    if(avcodec_open(pCodecCtx, pCodec)<0)
        return -1; // Could not open codec

    // Hack to correct wrong frame rates that seem to be generated by some codecs
    if(pCodecCtx->time_base.num>1000 && pCodecCtx->time_base.den==1)
        pCodecCtx->time_base.den=1000;
    //pCodecCtx->time_base.den=1;
    //pCodecCtx->time_base.num=1;

    // Allocate video frame
    pFrame=avcodec_alloc_frame();

    // Allocate an AVFrame structure
    pFrameRGB=avcodec_alloc_frame();
    if(pFrameRGB==NULL)
        return -1;

    // Determine required buffer size and allocate buffer
    numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
        pCodecCtx->height);
    //buffer=malloc(numBytes);
    buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

    // Assign appropriate parts of buffer to image planes in pFrameRGB
    avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
        pCodecCtx->width, pCodecCtx->height);

    // Read frames and save them to disk
    i=0;
    while(av_read_frame(pFormatCtx, &packet)>=0)
    {
        // Is this a packet from the video stream?
        if(packet.stream_index==videoStream)
        {
            // Decode video frame
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,
                &packet);
            // Did we get a video frame?
            if(frameFinished)
            {
                static struct SwsContext *img_convert_ctx;
#if 0
                // Older removed code
                // Convert the image from its native format to RGB swscale
                img_convert((AVPicture *)pFrameRGB, PIX_FMT_RGB24,
                    (AVPicture*)pFrame, pCodecCtx->pix_fmt, pCodecCtx->width,
                    pCodecCtx->height);
                // function template, for reference
                int sws_scale(struct SwsContext *context, uint8_t* src[], int srcStride[], int srcSliceY,
                    int srcSliceH, uint8_t* dst[], int dstStride[]);
#endif
                // Convert the image into the RGB format used for saving
                if(img_convert_ctx == NULL) {
                    int w = pCodecCtx->width;
                    int h = pCodecCtx->height;
                    img_convert_ctx = sws_getContext(w, h,
                        pCodecCtx->pix_fmt,
                        w, h, PIX_FMT_RGB24, SWS_BICUBIC,
                        NULL, NULL, NULL);
                    if(img_convert_ctx == NULL) {
                        fprintf(stderr, "Cannot initialize the conversion context!\n");
                        exit(1);
                    }
                }
                int ret = sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0,
                    pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
#if 0
                // this used to be true, as of 1/2009, but apparently it is no longer true in 3/2009
                if(ret) {
                    fprintf(stderr, "SWS_Scale failed [%d]!\n", ret);
                    exit(-1);
                }
#endif
                // Save the frame to disk
                if(i++<=1000)
                    SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, i);
            }
        }
        // Free the packet that was allocated by av_read_frame
        av_free_packet(&packet);
        //sleep(1);
    }

    // Free the RGB image
    //free(buffer);
    av_free(buffer);
    av_free(pFrameRGB);

    // Free the YUV frame
    av_free(pFrame);

    // Close the codec
    avcodec_close(pCodecCtx);

    // Close the video file
    av_close_input_file(pFormatCtx);

    return 0;
}

static void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame)
{
    FILE *pFile;
    char szFilename[32];
    int y;

    // Open file
    sprintf(szFilename, "frame%d.ppm", iFrame);
    pFile=fopen(szFilename, "wb");
    if(pFile==NULL)
        return;

    // Write header
    fprintf(pFile, "P6\n%d %d\n255\n", width, height);

    // Write pixel data, one row at a time
    for(y=0; y<height; y++)
        fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

    // Close file
    fclose(pFile);
}
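The excerpt contains no answer, so here is one possible approach, offered strictly as a sketch: an RTSP client usually cannot change the camera's own capture rate, but it can drop frames after decoding and let swscale shrink the frames it keeps. The names keepEvery, outW, outH, frameIndex and savedIndex are introduced here for illustration, and pFrameRGB would need its buffer allocated for the smaller size via avpicture_get_size(PIX_FMT_RGB24, outW, outH).
static const int keepEvery = 25;          /* 25 fps source -> roughly one saved frame per second */
static const int outW = 352, outH = 288;  /* CIF-sized output instead of the full camera resolution */
int frameIndex = 0, savedIndex = 0;
/* ... inside the av_read_frame() loop, replacing the unconditional SaveFrame() call: */
if (frameFinished && (frameIndex++ % keepEvery) == 0) {
    /* Downscale while converting to RGB: a smaller destination means a smaller PPM file. */
    struct SwsContext *small_ctx = sws_getContext(
        pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
        outW, outH, PIX_FMT_RGB24,
        SWS_BICUBIC, NULL, NULL, NULL);
    sws_scale(small_ctx, pFrame->data, pFrame->linesize,
              0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
    SaveFrame(pFrameRGB, outW, outH, savedIndex++);
    sws_freeContext(small_ctx);
}
-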
Using ffmpeg to create a waveform image from opus
29 December 2015, by edwardsmarkf
I have been trying to use ffmpeg to create a waveform image from an opus file. So far I have found three different methods, but I cannot determine which one is best.
The end result should be a waveform only about 55 px high; the image will become part of a CSS background-image.
Adapted from "Generating a waveform using ffmpeg":
ffmpeg -i file.opus -filter_complex "showwavespic,colorbalance=bs=0.5:gm=0.3:bh=-0.5,drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=black@0.5" file.png
Next, I found this one (my favorite because of its simplicity):
ffmpeg -i test.opus -lavfi showwavespic=split_channels=1:s=1024x800 test.png
Finally, this one from the FFmpeg wiki's Waveform page, though it seems less efficient since it uses a second utility (gnuplot) rather than ffmpeg alone:
ffmpeg -i file.opus -ac 1 -filter:a aresample=4000 -map 0:a -c:a pcm_s16le -f data - | \
gnuplot -e "set terminal png size 525,050; set output 'file.png'; unset key; unset tics; unset border; set lmargin 0; set rmargin 0; set tmargin 0; set bmargin 0; plot '(...)
Option two is my favorite, but I don't like the margins on the top and bottom of the waveforms.
Option three (using gnuplot) makes the best 'shaped' image for our needs, since the initial spike in the sound otherwise makes the rest almost too small to use (the lines tend to almost disappear) when the image is only 50 pixels high.
Any suggestions on how best to approach this? I really understand very little about any of these options, except of course for the size. Note too that I have tens of thousands of these to process, so naturally I want to make a wise choice at the very beginning.
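No answer appears in this excerpt. One possible refinement of option two, offered as an untested sketch: showwavespic accepts an exact output size, so the 55 px height can be requested directly with no margins to crop, and running the audio through compand first compresses the dynamic range so an initial loud spike no longer flattens the rest of the trace. The 600x55 size is an illustrative value:
ffmpeg -i file.opus -filter_complex "compand,showwavespic=s=600x55" -frames:v 1 file.png
Since showwavespic emits a single video frame, -frames:v 1 simply makes that explicit.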