
Other articles (30)
-
Keeping control of your media in your hands
13 April 2011
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
Images
15 May 2013 -
Definable image and logo sizes
9 February 2011
In many places on the site, logos and images are resized to fit the slots defined by the themes. Since these sizes can vary from one theme to another, they can be defined directly in the theme, which spares the user from having to configure them manually after changing the appearance of the site.
These image sizes are also available in the specific configuration of MediaSPIP Core. The maximum size of the site logo in pixels (...)
On other sites (6407)
-
How can I put MediaRecorder output into avformat_open_input?
30 March 2013, by user1914692
I want to use ffmpeg to process the video stream from Android MediaRecorder.
It is known that the camera output can be sent to a local socket instead of a file:
mediaRecorder.setOutputFile(mSender.getFileDescriptor());
where mSender is a local socket.
Related code can be seen in spydroid-ipcamera.
Now I want to use ffmpeg to process it.
How can I put MediaRecorder output into avformat_open_input?
avformat_open_input(&fmt_ctx, ????, NULL, NULL)
What should replace the ???? placeholder?
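One possible approach (not from the original question) is to skip the file name entirely and give libavformat a custom AVIOContext whose read callback pulls data from the socket's file descriptor. A minimal C sketch, assuming the read end of MediaRecorder's local socket is available as a plain int; the function names and buffer size are illustrative:

#include <unistd.h>
#include <libavformat/avformat.h>
#include <libavutil/mem.h>

/* Read callback: libavformat calls this whenever it needs more bytes. */
static int read_fd_packet(void *opaque, uint8_t *buf, int buf_size)
{
    int fd = *(int *) opaque;              /* the local socket's file descriptor */
    ssize_t n = read(fd, buf, buf_size);   /* blocking read from the socket */
    return n > 0 ? (int) n : AVERROR_EOF;  /* report end of stream on 0 or error */
}

/* Hypothetical helper: open a demuxer on a file descriptor instead of a path. */
static int open_from_fd(AVFormatContext **fmt_ctx, int *fd)
{
    const int buf_size = 4096;
    unsigned char *buf = av_malloc(buf_size);
    AVIOContext *io = avio_alloc_context(buf, buf_size, 0 /* read-only */,
                                         fd, read_fd_packet, NULL, NULL);
    *fmt_ctx = avformat_alloc_context();
    (*fmt_ctx)->pb = io;                   /* use the callback instead of a URL */
    /* With custom I/O the URL argument is informational only, so pass a dummy. */
    return avformat_open_input(fmt_ctx, "", NULL, NULL);
}

One caveat: MediaRecorder's MP4/3GPP output normally writes its moov header only when recording stops, so a live, non-seekable stream may still fail to demux; that is presumably why spydroid-ipcamera parses the raw stream itself instead of handing it to a demuxer.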
-
Trying to determine H.264 profile & level programmatically
23 June 2012, by kryptobs2000
Ideally the solution would be in Python and cross-platform, but that's probably not too likely, so all I require is that it work on Linux, and I can use a C extension to interface with Python if necessary. I see there is a Python binding for ffmpeg which I was thinking about using, but I can't figure out how to determine the profile and level as it is, with ffmpeg or anything else, much less do it programmatically. Google is not much help on the matter either.
I've been able to work out what features I'd be looking for, so if I needed to determine the profile and level manually I could do that, but that leads to the question: can ffmpeg determine whether the video was encoded with that feature set? What I'm wondering, to that effect, is whether it is perhaps not possible to fully determine the level and specific profile after encoding. I would think you'd have to know them in order to decode the video, but maybe not; that would explain why I can't find any information on it. I've been toying with this on and off for a while, but recently decided to take on a project I'd been thinking about, and this is one of the big things holding me back.
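For what it's worth, the profile and level are written into the H.264 sequence parameter set, so they remain readable after encoding; ffprobe prints them (for example with -show_entries stream=profile,level), and the C API exposes them as well. A minimal C sketch, assuming a recent FFmpeg where these fields live on AVCodecParameters (older releases keep them on AVCodecContext):

#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int main(int argc, char **argv)
{
    AVFormatContext *fmt = NULL;
    if (argc < 2 || avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
        return 1;
    if (avformat_find_stream_info(fmt, NULL) < 0)
        return 1;

    int v = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (v >= 0) {
        AVCodecParameters *par = fmt->streams[v]->codecpar;
        const char *name = avcodec_profile_name(par->codec_id, par->profile);
        /* For H.264 the level is stored as an integer, e.g. 31 for level 3.1. */
        printf("profile=%s level=%d.%d\n",
               name ? name : "unknown", par->level / 10, par->level % 10);
    }
    avformat_close_input(&fmt);
    return 0;
}

Wrapping the equivalent ffprobe call in a subprocess is probably the simplest way to get the same information from Python.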
-
Xuggle - Concatenate two videos - Error - java.lang.RuntimeException: error -1094995529 decoding audio
1 April 2013, by user2232357
I am using the Xuggle API to concatenate two MPEG videos (each with audio built in).
I am referring to https://code.google.com/p/xuggle/source/browse/trunk/java/xuggle-xuggler/src/com/xuggle/mediatool/demos/ConcatenateAudioAndVideo.java?r=929 (both my inputs and the output are MPEGs). I am getting the error below.
14:06:50.139 [main] ERROR org.ffmpeg - [mp2 @ 0x7fd54693d000] incomplete frame
java.lang.RuntimeException: error -1094995529 decoding audio
at com.xuggle.mediatool.MediaReader.decodeAudio(MediaReader.java:549)
at com.xuggle.mediatool.MediaReader.readPacket(MediaReader.java:469)
at com.tav.factory.video.XuggleMediaCreator.concatenateAllVideos(XuggleMediaCreator.java:271)
at com.tav.factory.video.XuggleMediaCreator.main(XuggleMediaCreator.java:446)
Can anyone help me with this? Thanks in advance.
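As an aside (not part of the original post): -1094995529 is the numeric value of FFmpeg's AVERROR_INVALIDDATA, which matches the "incomplete frame" complaint from the mp2 decoder. A minimal C sketch of turning such a code into readable text, assuming a program linked against libavutil:

#include <stdio.h>
#include <libavutil/error.h>

int main(void)
{
    char msg[AV_ERROR_MAX_STRING_SIZE];
    /* -1094995529 corresponds to AVERROR_INVALIDDATA. */
    av_strerror(-1094995529, msg, sizeof(msg));
    printf("%d: %s\n", -1094995529, msg);   /* "Invalid data found when processing input" */
    return 0;
}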
Here is the complete code.
public String concatenateAllVideos(ArrayList<tavtexttoavrequest> list){
String finalPath="";
String sourceUrl1 = "/Users/SSID/WS/SampleTTS/page2/AV_TAVImage2.mpeg";
String sourceUrl2 = "/Users/SSID/WS/SampleTTS/page2/AV_TAVImage3.mpeg";
String destinationUrl = "/Users/SSID/WS/SampleTTS/page2/z_AV_TAVImage_Final23.mpeg";
out.printf("transcode %s + %s -> %s\n", sourceUrl1, sourceUrl2,
destinationUrl);
//////////////////////////////////////////////////////////////////////
// //
// NOTE: be sure that the audio and video parameters match those of //
// your input media //
// //
//////////////////////////////////////////////////////////////////////
// video parameters
final int videoStreamIndex = 0;
final int videoStreamId = 0;
final int width = 400;
final int height = 400;
// audio parameters
final int audioStreamIndex = 1;
final int audioStreamId = 0;
final int channelCount = 1;
final int sampleRate = 16000 ; // Hz 16000 44100;
// create the first media reader
IMediaReader reader1 = ToolFactory.makeReader(sourceUrl1);
// create the second media reader
IMediaReader reader2 = ToolFactory.makeReader(sourceUrl2);
// create the media concatenator
MediaConcatenator concatenator = new MediaConcatenator(audioStreamIndex,
videoStreamIndex);
// concatenator listens to both readers
reader1.addListener(concatenator);
reader2.addListener(concatenator);
// create the media writer which listens to the concatenator
IMediaWriter writer = ToolFactory.makeWriter(destinationUrl);
concatenator.addListener(writer);
// add the video stream
writer.addVideoStream(videoStreamIndex, videoStreamId, width, height);
// add the audio stream
writer.addAudioStream(audioStreamIndex, audioStreamId, channelCount,sampleRate);
// read packets from the first source file until done
try {
while (reader1.readPacket() == null)
;
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
// read packets from the second source file until done
try {
while (reader2.readPacket() == null)
;
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
// close the writer
writer.close();
return finalPath;
}
static class MediaConcatenator extends MediaToolAdapter
{
// the current offset
private long mOffset = 0;
// the next video timestamp
private long mNextVideo = 0;
// the next audio timestamp
private long mNextAudio = 0;
// the index of the audio stream
private final int mAudoStreamIndex;
// the index of the video stream
private final int mVideoStreamIndex;
/**
* Create a concatenator.
*
* @param audioStreamIndex index of audio stream
* @param videoStreamIndex index of video stream
*/
public MediaConcatenator(int audioStreamIndex, int videoStreamIndex)
{
mAudoStreamIndex = audioStreamIndex;
mVideoStreamIndex = videoStreamIndex;
}
public void onAudioSamples(IAudioSamplesEvent event)
{
IAudioSamples samples = event.getAudioSamples();
// set the new time stamp to the original plus the offset established
// for this media file
long newTimeStamp = samples.getTimeStamp() + mOffset;
// keep track of predicted time of the next audio samples, if the end
// of the media file is encountered, then the offset will be adjusted
// to this time.
mNextAudio = samples.getNextPts();
// set the new timestamp on audio samples
samples.setTimeStamp(newTimeStamp);
// create a new audio samples event with the one true audio stream
// index
super.onAudioSamples(new AudioSamplesEvent(this, samples,
mAudoStreamIndex));
}
public void onVideoPicture(IVideoPictureEvent event)
{
IVideoPicture picture = event.getMediaData();
long originalTimeStamp = picture.getTimeStamp();
// set the new time stamp to the original plus the offset established
// for this media file
long newTimeStamp = originalTimeStamp + mOffset;
// keep track of predicted time of the next video picture, if the end
// of the media file is encountered, then the offset will be adjusted
// to this time.
//
// You'll note in the audio samples listener above we used
// a method called getNextPts(). Video pictures don't have
// a similar method because frame-rates can be variable, so
// we don't know. The minimum thing we do know though (since
// all media containers require media to have monotonically
// increasing time stamps), is that the next video timestamp
// should be at least one tick ahead. So, we fake it.
mNextVideo = originalTimeStamp + 1;
// set the new timestamp on video samples
picture.setTimeStamp(newTimeStamp);
// create a new video picture event with the one true video stream
// index
super.onVideoPicture(new VideoPictureEvent(this, picture,
mVideoStreamIndex));
}
public void onClose(ICloseEvent event)
{
// update the offset by the larger of the next expected audio or video
// frame time
mOffset = Math.max(mNextVideo, mNextAudio);
if (mNextAudio < mNextVideo)
{
// In this case we know that there is more video in the
// last file that we read than audio. Technically you
// should pad the audio in the output file with enough
// samples to fill that gap, as many media players (e.g.
// Quicktime, Microsoft Media Player, MPlayer) actually
// ignore audio time stamps and just play audio sequentially.
// If you don't pad, in those players it may look like
// audio and video is getting out of sync.
// However kiddies, this is demo code, so that code
// is left as an exercise for the readers. As a hint,
// see the IAudioSamples.defaultPtsToSamples(...) methods.
}
}
public void onAddStream(IAddStreamEvent event)
{
// overridden to ensure that add stream events are not passed down
// the tool chain to the writer, which could cause problems
}
public void onOpen(IOpenEvent event)
{
// overridden to ensure that open events are not passed down the tool
// chain to the writer, which could cause problems
}
public void onOpenCoder(IOpenCoderEvent event)
{
// overridden to ensure that open coder events are not passed down the
// tool chain to the writer, which could cause problems
}
public void onCloseCoder(ICloseCoderEvent event)
{
// overridden to ensure that close coder events are not passed down the
// tool chain to the writer, which could cause problems
}
}