
Media (17)
-
Matmos - Action at a Distance
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Danger Mouse & Jemini - What U Sittin’ On? (starring Cee Lo and Tha Alkaholiks)
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Cornelius - Wataridori 2
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
The Rapture - Sister Saviour (Blackstrobe Remix)
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Chuck D with Fine Arts Militia - No Meaning No
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (62)
-
Contributing to its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
For this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to sign up to the translators' mailing list to ask for more information.
Currently, MediaSPIP is only available in French and (...) -
Enabling/disabling features (plugins)
18 February 2011, by
To manage the addition and removal of extra features (plugins), MediaSPIP uses SVP as of version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To get there, go to the configuration area and open the "Gestion des plugins" (plugin management) page.
By default, MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work seamlessly with each (...) -
The farm's regular Cron tasks
1 December 2010, by
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled to run every minute, simply calls the Cron of every instance in the shared-hosting farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)
On other sites (8165)
-
Using ffprobe to get number of keyframes in raw AVI file *without* processing entire file?
26 July 2018, by aggieNick02
This question and answer cover how to get the frame count and keyframe count from an AVI file, which is very useful. I've got a raw AVI file and want to count the number of keyframes (equivalent to non-dropped frames for raw AVI), but churning through an entire raw AVI file takes a long time.
There must be some way to get this information without fully processing the file: VirtualDub shows both the frame count and the keyframe count, as well as the total keyframe size, in its file information almost instantly for a 25-second raw 1920x1080 AVI. ffprobe, however, requires count_frames to populate nb_read_frames, which takes considerable processing time.
I can do some math with the file's size and the frame width/height/format to get a fairly good estimate of the frame count, but I'm worried that container overhead could throw the math off for very short clips. (For my 25-second clip, I get 1286.12 frames when there are really 1286.)
Any thoughts on whether there is a way to get this information programmatically with ffprobe or ffmpeg without processing the whole file? Or with another API on Windows?
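One avenue that may be worth sketching (an editor's suggestion, not from the original thread): FFmpeg's AVI demuxer reads the file's idx1 index chunk when the input is opened, without decoding frames, so counting the index entries flagged as keyframes through libavformat should be nearly instant. A minimal C sketch, assuming FFmpeg 5.x for the avformat_index_get_* accessors (older releases exposed the same data as st->nb_index_entries/st->index_entries):

#include <inttypes.h>
#include <stdio.h>
#include <libavformat/avformat.h>

int main(int argc, char **argv)
{
    AVFormatContext *fmt = NULL;
    if (argc < 2 || avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
        return 1;
    /* Reads stream headers; for AVI this also loads the idx1 index
     * without decoding any frames. */
    if (avformat_find_stream_info(fmt, NULL) < 0)
        return 1;

    int v = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (v < 0)
        return 1;
    AVStream *st = fmt->streams[v];

    int keyframes = 0;
    int entries = avformat_index_get_entries_count(st);
    for (int i = 0; i < entries; i++) {
        const AVIndexEntry *e = avformat_index_get_entry(st, i);
        if (e && (e->flags & AVINDEX_KEYFRAME))
            keyframes++;
    }
    /* For AVI, nb_frames is filled from the header, also without decoding. */
    printf("frames (header): %" PRId64 ", index entries: %d, keyframes: %d\n",
           st->nb_frames, entries, keyframes);

    avformat_close_input(&fmt);
    return 0;
}

Build with something like gcc count_keyframes.c -lavformat -lavutil; whether the idx1 entries match VirtualDub's keyframe count for a given file is an assumption to verify.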
-
Android. Problems with AudioTrack class. Sound sometimes lost
6 June 2018, by bukka.wh
I have found an open-source video player for Android which uses ffmpeg to decode video.
I have some problems with the audio: it sometimes plays jerkily, although the video picture shows fine. The basic idea of the player is that audio and video are decoded in two different threads, and then in a third thread they are passed back: the video picture is shown on a SurfaceView, and the audio is passed as a byte array to an AudioTrack and played. But sometimes the sound is lost or plays jerkily. Can anyone give me a starting point for what to do (some basic concepts)? Maybe I should change the buffer size for the AudioTrack or add some flags to it. Here is the piece of code where the AudioTrack is created:
private AudioTrack prepareAudioTrack(int sampleRateInHz,
        int numberOfChannels) {
    for (;;) {
        int channelConfig;
        if (numberOfChannels == 1) {
            channelConfig = AudioFormat.CHANNEL_OUT_MONO;
        } else if (numberOfChannels == 2) {
            channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
        } else if (numberOfChannels == 3) {
            channelConfig = AudioFormat.CHANNEL_OUT_FRONT_CENTER
                    | AudioFormat.CHANNEL_OUT_FRONT_RIGHT
                    | AudioFormat.CHANNEL_OUT_FRONT_LEFT;
        } else if (numberOfChannels == 4) {
            channelConfig = AudioFormat.CHANNEL_OUT_QUAD;
        } else if (numberOfChannels == 5) {
            channelConfig = AudioFormat.CHANNEL_OUT_QUAD
                    | AudioFormat.CHANNEL_OUT_LOW_FREQUENCY;
        } else if (numberOfChannels == 6) {
            channelConfig = AudioFormat.CHANNEL_OUT_5POINT1;
        } else if (numberOfChannels == 8) {
            channelConfig = AudioFormat.CHANNEL_OUT_7POINT1;
        } else {
            channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
        }
        try {
            Log.d("MyLog", "Creating Audio player");
            int minBufferSize = AudioTrack.getMinBufferSize(sampleRateInHz,
                    channelConfig, AudioFormat.ENCODING_PCM_16BIT);
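            // Editor's note (a suggestion, not part of the original post):
            // streaming at exactly getMinBufferSize() commonly underruns and
            // causes exactly this kind of jerky playback; allocating a
            // multiple of the minimum (e.g. 4 * minBufferSize) is a common
            // first thing to try.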
            AudioTrack audioTrack = new AudioTrack(
                    AudioManager.STREAM_MUSIC, sampleRateInHz,
                    channelConfig, AudioFormat.ENCODING_PCM_16BIT,
                    minBufferSize, AudioTrack.MODE_STREAM);
            return audioTrack;
        } catch (IllegalArgumentException e) {
            if (numberOfChannels > 2) {
                numberOfChannels = 2;
            } else if (numberOfChannels > 1) {
                numberOfChannels = 1;
            } else {
                throw e;
            }
        }
    }
}

And this is the piece of native code where the sound bytes are written to the AudioTrack:
int player_write_audio(struct DecoderData *decoder_data, JNIEnv *env,
        int64_t pts, uint8_t *data, int data_size, int original_data_size) {
    struct Player *player = decoder_data->player;
    int stream_no = decoder_data->stream_no;
    int err = ERROR_NO_ERROR;
    int ret;
    AVCodecContext *c = player->input_codec_ctxs[stream_no];
    AVStream *stream = player->input_streams[stream_no];
    LOGI(10, "player_write_audio Writing audio frame")
    jbyteArray samples_byte_array = (*env)->NewByteArray(env, data_size);
    if (samples_byte_array == NULL) {
        err = -ERROR_NOT_CREATED_AUDIO_SAMPLE_BYTE_ARRAY;
        goto end;
    }
    if (pts != AV_NOPTS_VALUE) {
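        /* A pts came from the demuxer: rescale it from the stream's
         * time_base to microseconds (AV_TIME_BASE_Q = 1/1000000). */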
        player->audio_clock = av_rescale_q(pts, stream->time_base, AV_TIME_BASE_Q);
        LOGI(9, "player_write_audio - read from pts")
    } else {
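        /* No pts available: advance the audio clock by this buffer's
         * duration, i.e. bytes * 1e6 / (channels * sample_rate *
         * bytes_per_sample) microseconds. */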
        int64_t sample_time = original_data_size;
        sample_time *= 1000000ll;
        sample_time /= c->channels;
        sample_time /= c->sample_rate;
        sample_time /= av_get_bytes_per_sample(c->sample_fmt);
        player->audio_clock += sample_time;
        LOGI(9, "player_write_audio - added")
    }
    enum WaitFuncRet wait_ret = player_wait_for_frame(player,
            player->audio_clock + AUDIO_TIME_ADJUST_US, stream_no);
    if (wait_ret == WAIT_FUNC_RET_SKIP) {
        goto end;
    }
    LOGI(10, "player_write_audio Writing sample data")
    jbyte *jni_samples = (*env)->GetByteArrayElements(env, samples_byte_array,
            NULL);
    memcpy(jni_samples, data, data_size);
    (*env)->ReleaseByteArrayElements(env, samples_byte_array, jni_samples, 0);
    LOGI(10, "player_write_audio playing audio track");
    ret = (*env)->CallIntMethod(env, player->audio_track,
            player->audio_track_write_method, samples_byte_array, 0, data_size);
    jthrowable exc = (*env)->ExceptionOccurred(env);
    if (exc) {
        err = -ERROR_PLAYING_AUDIO;
        LOGE(3, "Could not write audio track: reason in exception");
        // TODO maybe release exc
        goto free_local_ref;
    }
    if (ret < 0) {
        err = -ERROR_PLAYING_AUDIO;
        LOGE(3,
                "Could not write audio track: reason: %d look in AudioTrack.write()", ret);
        goto free_local_ref;
    }
free_local_ref:
    LOGI(10, "player_write_audio releasing local ref");
    (*env)->DeleteLocalRef(env, samples_byte_array);
end:
    return err;
}
I would be glad of any help! Thank you very much!
-
Unity: Converting Texture2D to YUV420P using FFmpeg
23 July 2021, by strong_kobayashi
I'm trying to create a game in Unity where each frame is rendered into a texture and then put together into a video using FFmpeg. The output created by FFmpeg should eventually be sent over the network to a client UI. However, I'm struggling mainly with the part where a frame is captured and passed as a byte array to an unsafe method, where it should be processed further by FFmpeg. The wrapper I'm using is FFmpeg.AutoGen.

The render-to-texture method:

private IEnumerator CaptureFrame()
{
    yield return new WaitForEndOfFrame();

    RenderTexture.active = rt;
    frame.ReadPixels(rect, 0, 0);
    frame.Apply();

    bytes = frame.GetRawTextureData();

    EncodeAndWrite(bytes, bytes.Length);
}

The unsafe encoding method so far:

private unsafe void EncodeAndWrite(byte[] bytes, int size)
{
    GCHandle pinned = GCHandle.Alloc(bytes, GCHandleType.Pinned);
    IntPtr address = pinned.AddrOfPinnedObject();

    sbyte** inData = (sbyte**)address;
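    // Editor's note (an observation, not from the original post): 'address'
    // points at the first pixel byte, so casting it to sbyte** makes
    // sws_scale read pixel data as if it were plane pointers. sws_scale
    // expects an array of per-plane pointers; for packed RGBA that is a
    // one-element array holding the pixel pointer itself, e.g.:
    //     sbyte* pixels = (sbyte*)address;
    //     sbyte** inData = &pixels;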
    fixed (int* lineSize = new int[1])
    {
        lineSize[0] = 4 * textureWidth;
        // Convert RGBA to YUV420P
        ffmpeg.sws_scale(sws, inData, lineSize, 0, codecContext->width, inputFrame->extended_data, inputFrame->linesize);
    }

    inputFrame->pts = frameCounter++;

    if (ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0)
        throw new ApplicationException("Error sending a frame for encoding!");

    pkt = new AVPacket();
    fixed (AVPacket* packet = &pkt)
        ffmpeg.av_init_packet(packet);
    pkt.data = null;
    pkt.size = 0;

    pinned.Free();
    ...
}

sws_scale takes a sbyte** as the second parameter, therefore I'm trying to convert the input byte array to sbyte** by first pinning it with GCHandle and doing an explicit type conversion afterwards. I don't know if that's the correct way, though.

Moreover, the condition if(ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0) always throws an ApplicationException, and I really don't know why this happens. codecContext and inputFrame are my AVCodecContext and AVFrame objects, respectively, and the fields are defined as follows:

codecContext:

codecContext = ffmpeg.avcodec_alloc_context3(codec);
codecContext->bit_rate = 400000;
codecContext->width = textureWidth;
codecContext->height = textureHeight;

AVRational timeBase = new AVRational();
timeBase.num = 1;
timeBase.den = (int)fps;
codecContext->time_base = timeBase;
videoAVStream->time_base = timeBase;

AVRational frameRate = new AVRational();
frameRate.num = (int)fps;
frameRate.den = 1;
codecContext->framerate = frameRate;

codecContext->gop_size = 10;
codecContext->max_b_frames = 1;
codecContext->pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P;

inputFrame:

inputFrame = ffmpeg.av_frame_alloc();
inputFrame->format = (int)codecContext->pix_fmt;
inputFrame->width = textureWidth;
inputFrame->height = textureHeight;
inputFrame->linesize[0] = inputFrame->width;

Any help in fixing the issue would be greatly appreciated :)
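
A plausible lead (an editor's observation, not from the original post): inputFrame is configured but never given data buffers — av_frame_get_buffer is never called — so avcodec_send_frame has nothing to read, and linesize[0] = width only describes the luma plane of YUV420P, not the two half-width chroma planes. Note also that the sws_scale call above passes codecContext->width where the slice height is expected. A minimal C sketch of the libav side, assuming hypothetical width/height variables matching the texture:

#include <libavutil/frame.h>
#include <libswscale/swscale.h>

/* Allocate a writable YUV420P frame and convert one packed RGBA buffer
 * into it. Returns 0 on success, a negative AVERROR code otherwise. */
static int rgba_to_yuv420p(AVFrame *frame, const uint8_t *rgba,
                           int width, int height)
{
    frame->format = AV_PIX_FMT_YUV420P;
    frame->width  = width;
    frame->height = height;

    /* Fills frame->data and frame->linesize for all three planes;
     * without this, avcodec_send_frame() fails. */
    int ret = av_frame_get_buffer(frame, 0);
    if (ret < 0)
        return ret;

    struct SwsContext *sws = sws_getContext(width, height, AV_PIX_FMT_RGBA,
                                            width, height, AV_PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws)
        return AVERROR(ENOMEM);

    /* sws_scale() takes an array of plane pointers; packed RGBA is one plane. */
    const uint8_t *src_data[1]     = { rgba };
    const int      src_linesize[1] = { 4 * width };

    sws_scale(sws, src_data, src_linesize, 0, height,
              frame->data, frame->linesize);
    sws_freeContext(sws);
    return 0;
}

The same calls are exposed by FFmpeg.AutoGen (ffmpeg.av_frame_get_buffer, ffmpeg.sws_getContext, ...), so the sketch should translate to the C# side almost one-to-one.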