
Advanced search
Media (10)
-
Demon Seed
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Demon seed (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
The four of us are dying (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Corona radiata (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Lights in the sky (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Head down (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
Other articles (53)
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file available here contains only the MediaSPIP sources in the standalone version.
As with the previous version, you need to install all the software dependencies on the server manually.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...) -
Making the files available
14 April 2011, by
By default, on initialization, MediaSPIP does not allow visitors to download files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is possible, and easy, to give visitors access to these documents in various forms.
All of this happens in the skeleton configuration page. You need to go to the channel's administration area and choose in the navigation (...) -
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file available here contains only the MediaSPIP sources in the standalone version.
To get a working installation, you need to install all the software dependencies on the server manually.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)
On other sites (6608)
-
Java/OpenCV - How to do a lossless h264 video writing in openCV ?
15 August 2018, by JohnDoeAnon
Lately I have had some struggles with the VideoWriter in OpenCV under Java. I want to write a video file into an *.mp4 container with the H.264 codec, but I see no option to set the bitrate or quality in OpenCV's VideoWriter. I did build OpenCV with ffmpeg as the backend. I just want to write the video file at exactly the same quality as the original input video.
I also have some code to do the job:

import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.videoio.VideoWriter;
import org.opencv.videoio.Videoio;

public class VideoOutput
{
    private final int H264_CODEC = 33;

    private VideoWriter writer;
    private String filename;

    public VideoOutput(String filename)
    {
        writer = null;
        this.filename = filename;
    }

    public void initialize(double framesPerSecond, int height, int width) throws Exception
    {
        this.writer = new VideoWriter();
        this.writer.open(filename, H264_CODEC, framesPerSecond, new Size(width, height));

        if (!writer.isOpened())
        {
            Logging.LOGGER.severe("Could not create video output file " + filename + "\n");
            throw new Exception("Could not create video output file " + filename + "\n");
        }
    }

    public void setFrame(VideoFrame videoFrame) throws Exception
    {
        if (writer.isOpened())
        {
            Mat frame = ImageUtil.imageToMat(videoFrame.getFrame());
            writer.write(frame);
            frame.release();
        }
    }
}

I hoped the VideoWriter would give some options to do the job, but it seems it does not.
So, is there an option or flag that I am missing for lossless H.264 video writing under OpenCV and Java, OR is there another way to do this?
Please help me. If you have done this already, I would really appreciate some example code to get things done.

UPDATE

I now have a solution that fits my application, so here it is:
String fps = Double.toString(this.config.getInputConfig().getFramesPerSecond());

Runtime.getRuntime().exec(
    new String[] {
        "C:\\ffmpeg-3.4.2-win64-static\\bin\\ffmpeg.exe",
        "-framerate",
        fps,
        "-i",
        imageOutputPath + File.separator + "%01d.jpg",
        "-c:v",
        "libx265",
        "-crf",
        "1",
        imageOutputPath + File.separator + "ffmpeg.mp4"
    }
);

Credits to @Gyan, who gave me the correct ffmpeg call in this post:
Win/ffmpeg - How to generate a video from images under ffmpeg ?
Greets
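One caveat worth noting about the accepted solution: `-crf 1` with libx265 is visually near-lossless but not mathematically lossless. For truly lossless output, x264 accepts `-qp 0` (equivalently `-crf 0`), and x265 accepts `-x265-params lossless=1`. Below is a sketch of the same approach adapted for lossless libx264 encoding; the ffmpeg path, frame pattern, and output name are placeholders, as in the original post:

```java
import java.util.Arrays;
import java.util.List;

public class LosslessH264Command {
    /**
     * Builds an ffmpeg argument list that encodes a numbered JPEG sequence
     * to mathematically lossless H.264 (libx264 with -qp 0).
     * ffmpegPath, fps, inputPattern and output are caller-supplied placeholders.
     */
    public static String[] build(String ffmpegPath, String fps,
                                 String inputPattern, String output) {
        List<String> cmd = Arrays.asList(
                ffmpegPath,
                "-framerate", fps,
                "-i", inputPattern,     // e.g. frames/%01d.jpg
                "-c:v", "libx264",
                "-qp", "0",             // lossless quantizer
                "-pix_fmt", "yuv444p",  // avoid chroma-subsampling loss
                output);
        return cmd.toArray(new String[0]);
    }

    public static void main(String[] args) {
        String[] cmd = build("ffmpeg", "30", "frames/%01d.jpg", "out.mp4");
        // Runtime.getRuntime().exec(cmd); // uncomment where ffmpeg is installed
        System.out.println(String.join(" ", cmd));
    }
}
```

Note that even with a lossless encoder setting, going through JPEG intermediates is itself lossy; writing PNG frames instead would preserve the exact pixels end to end.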
-
Unity : Converting Texture2D to YUV420P using FFmpeg
23 July 2021, by strong_kobayashi
I'm trying to create a game in Unity where each frame is rendered into a texture and then assembled into a video using FFmpeg. The output created by FFmpeg should eventually be sent over the network to a client UI. However, I'm struggling mainly with the part where a frame is captured and passed as a byte array to an unsafe method, where it should be processed further by FFmpeg. The wrapper I'm using is FFmpeg.AutoGen.



The render-to-texture method:



private IEnumerator CaptureFrame()
{
 yield return new WaitForEndOfFrame();

 RenderTexture.active = rt;
 frame.ReadPixels(rect, 0, 0);
 frame.Apply();

 bytes = frame.GetRawTextureData();

 EncodeAndWrite(bytes, bytes.Length);
}




The unsafe encoding method so far:



private unsafe void EncodeAndWrite(byte[] bytes, int size)
{
 GCHandle pinned = GCHandle.Alloc(bytes, GCHandleType.Pinned);
 IntPtr address = pinned.AddrOfPinnedObject();

 sbyte** inData = (sbyte**)address;
 fixed(int* lineSize = new int[1])
 {
 lineSize[0] = 4 * textureWidth;
 // Convert RGBA to YUV420P
 ffmpeg.sws_scale(sws, inData, lineSize, 0, codecContext->width, inputFrame->extended_data, inputFrame->linesize);
 }

 inputFrame->pts = frameCounter++;

 if(ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0)
 throw new ApplicationException("Error sending a frame for encoding!");

 pkt = new AVPacket();
 fixed(AVPacket* packet = &pkt)
 ffmpeg.av_init_packet(packet);
 pkt.data = null;
 pkt.size = 0;

 pinned.Free();
 ...
}




sws_scale takes an sbyte** as its second parameter, so I'm trying to convert the input byte array to sbyte** by first pinning it with GCHandle and doing an explicit type conversion afterwards. I don't know if that's the correct way, though.


Moreover, the condition
if(ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0)
always throws an ApplicationException, and I really don't know why this happens. codecContext and inputFrame are my AVCodecContext and AVFrame objects, respectively, and the fields are defined as follows:


codecContext



codecContext = ffmpeg.avcodec_alloc_context3(codec);
codecContext->bit_rate = 400000;
codecContext->width = textureWidth;
codecContext->height = textureHeight;

AVRational timeBase = new AVRational();
timeBase.num = 1;
timeBase.den = (int)fps;
codecContext->time_base = timeBase;
videoAVStream->time_base = timeBase;

AVRational frameRate = new AVRational();
frameRate.num = (int)fps;
frameRate.den = 1;
codecContext->framerate = frameRate;

codecContext->gop_size = 10;
codecContext->max_b_frames = 1;
codecContext->pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P;




inputFrame



inputFrame = ffmpeg.av_frame_alloc();
inputFrame->format = (int)codecContext->pix_fmt;
inputFrame->width = textureWidth;
inputFrame->height = textureHeight;
inputFrame->linesize[0] = inputFrame->width;




Any help in fixing the issue would be greatly appreciated :)
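Two things are worth checking in the setup above (assumptions, since the full code isn't shown): inputFrame never gets its data planes allocated; after setting format, width, and height, FFmpeg requires av_frame_get_buffer(inputFrame, 0) before the frame can be written to, and that call also fills in correct linesize values for all three YUV planes (setting linesize[0] by hand is not enough). A failing avcodec_send_frame is the typical symptom of a frame with no buffers. As for what sws_scale computes per pixel, here is a self-contained sketch of the RGB-to-YUV math, using full-range BT.601 coefficients (an assumption for illustration; sws_scale's default matrix is limited-range BT.601):

```java
public class RgbToYuv {
    /**
     * Converts one RGB pixel to full-range BT.601 YUV.
     * Returns {Y, U, V}, each clamped to 0..255.
     */
    public static int[] convert(int r, int g, int b) {
        int y = (int) Math.round( 0.299    * r + 0.587    * g + 0.114    * b);
        int u = (int) Math.round(-0.168736 * r - 0.331264 * g + 0.5      * b + 128);
        int v = (int) Math.round( 0.5      * r - 0.418688 * g - 0.081312 * b + 128);
        return new int[] {
            Math.max(0, Math.min(255, y)),
            Math.max(0, Math.min(255, u)),
            Math.max(0, Math.min(255, v))
        };
    }

    public static void main(String[] args) {
        // White maps to maximum luma and neutral chroma.
        int[] white = convert(255, 255, 255); // {255, 128, 128}
        System.out.println(white[0] + " " + white[1] + " " + white[2]);
    }
}
```

In YUV420P the U and V planes are additionally subsampled 2x2; that averaging is the part sws_scale handles beyond this per-pixel conversion.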


-
Android. Problems with AudioTrack class. Sound sometimes lost
6 June 2018, by bukka.wh
I have found an open-source video player for Android which uses ffmpeg to decode video.
I have some problems with the audio, which sometimes plays with jerks, although the video picture is shown well. The basic idea of the player is that audio and video are decoded in two different threads, and then in a third thread they are passed back: the video picture is shown on a SurfaceView, and the audio samples are passed in a byte array to an AudioTrack, which plays them. But sometimes the sound drops out or plays with jerks. Can anyone give me a starting point (some basic concepts)? Maybe I should change the buffer size for the AudioTrack or add some flags to it. Here is the piece of code where the AudioTrack is created:

private AudioTrack prepareAudioTrack(int sampleRateInHz,
        int numberOfChannels) {
    for (;;) {
        int channelConfig;
        if (numberOfChannels == 1) {
            channelConfig = AudioFormat.CHANNEL_OUT_MONO;
        } else if (numberOfChannels == 2) {
            channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
        } else if (numberOfChannels == 3) {
            channelConfig = AudioFormat.CHANNEL_OUT_FRONT_CENTER
                    | AudioFormat.CHANNEL_OUT_FRONT_RIGHT
                    | AudioFormat.CHANNEL_OUT_FRONT_LEFT;
        } else if (numberOfChannels == 4) {
            channelConfig = AudioFormat.CHANNEL_OUT_QUAD;
        } else if (numberOfChannels == 5) {
            channelConfig = AudioFormat.CHANNEL_OUT_QUAD
                    | AudioFormat.CHANNEL_OUT_LOW_FREQUENCY;
        } else if (numberOfChannels == 6) {
            channelConfig = AudioFormat.CHANNEL_OUT_5POINT1;
        } else if (numberOfChannels == 8) {
            channelConfig = AudioFormat.CHANNEL_OUT_7POINT1;
        } else {
            channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
        }
        try {
            Log.d("MyLog", "Creating Audio player");
            int minBufferSize = AudioTrack.getMinBufferSize(sampleRateInHz,
                    channelConfig, AudioFormat.ENCODING_PCM_16BIT);
            AudioTrack audioTrack = new AudioTrack(
                    AudioManager.STREAM_MUSIC, sampleRateInHz,
                    channelConfig, AudioFormat.ENCODING_PCM_16BIT,
                    minBufferSize, AudioTrack.MODE_STREAM);
            return audioTrack;
        } catch (IllegalArgumentException e) {
            if (numberOfChannels > 2) {
                numberOfChannels = 2;
            } else if (numberOfChannels > 1) {
                numberOfChannels = 1;
            } else {
                throw e;
            }
        }
    }
}

And this is the piece of native code where the sound bytes are written to the AudioTrack:
int player_write_audio(struct DecoderData *decoder_data, JNIEnv *env,
        int64_t pts, uint8_t *data, int data_size, int original_data_size) {
    struct Player *player = decoder_data->player;
    int stream_no = decoder_data->stream_no;
    int err = ERROR_NO_ERROR;
    int ret;
    AVCodecContext *c = player->input_codec_ctxs[stream_no];
    AVStream *stream = player->input_streams[stream_no];
    LOGI(10, "player_write_audio Writing audio frame")

    jbyteArray samples_byte_array = (*env)->NewByteArray(env, data_size);
    if (samples_byte_array == NULL) {
        err = -ERROR_NOT_CREATED_AUDIO_SAMPLE_BYTE_ARRAY;
        goto end;
    }

    if (pts != AV_NOPTS_VALUE) {
        player->audio_clock = av_rescale_q(pts, stream->time_base, AV_TIME_BASE_Q);
        LOGI(9, "player_write_audio - read from pts")
    } else {
        int64_t sample_time = original_data_size;
        sample_time *= 1000000ll;
        sample_time /= c->channels;
        sample_time /= c->sample_rate;
        sample_time /= av_get_bytes_per_sample(c->sample_fmt);
        player->audio_clock += sample_time;
        LOGI(9, "player_write_audio - added")
    }

    enum WaitFuncRet wait_ret = player_wait_for_frame(player,
            player->audio_clock + AUDIO_TIME_ADJUST_US, stream_no);
    if (wait_ret == WAIT_FUNC_RET_SKIP) {
        goto end;
    }

    LOGI(10, "player_write_audio Writing sample data")

    jbyte *jni_samples = (*env)->GetByteArrayElements(env, samples_byte_array,
            NULL);
    memcpy(jni_samples, data, data_size);
    (*env)->ReleaseByteArrayElements(env, samples_byte_array, jni_samples, 0);

    LOGI(10, "player_write_audio playing audio track");
    ret = (*env)->CallIntMethod(env, player->audio_track,
            player->audio_track_write_method, samples_byte_array, 0, data_size);
    jthrowable exc = (*env)->ExceptionOccurred(env);
    if (exc) {
        err = -ERROR_PLAYING_AUDIO;
        LOGE(3, "Could not write audio track: reason in exception");
        // TODO maybe release exc
        goto free_local_ref;
    }
    if (ret < 0) {
        err = -ERROR_PLAYING_AUDIO;
        LOGE(3,
                "Could not write audio track: reason: %d look in AudioTrack.write()", ret);
        goto free_local_ref;
    }

free_local_ref:
    LOGI(10, "player_write_audio releasing local ref");
    (*env)->DeleteLocalRef(env, samples_byte_array);

end:
    return err;
}
I would be glad of any help! Thank you very much!
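Jerky playback with MODE_STREAM usually points to buffer underruns: getMinBufferSize() returns the smallest buffer the AudioTrack can be created with, not a size that tolerates decoder or scheduler hiccups. A common mitigation (a suggestion here, not a confirmed fix for this player) is to allocate a multiple of the minimum, or to size the buffer for a fixed amount of audio. The arithmetic for 16-bit PCM can be sketched as:

```java
public class PcmBufferSize {
    /**
     * Bytes needed to hold `millis` ms of 16-bit PCM audio:
     * sampleRate * channels * 2 bytes per sample * millis / 1000.
     */
    public static int forMillis(int sampleRateInHz, int channels, int millis) {
        return sampleRateInHz * channels * 2 * millis / 1000;
    }

    public static void main(String[] args) {
        // 250 ms of 44.1 kHz stereo 16-bit PCM
        System.out.println(forMillis(44100, 2, 250)); // 44100 bytes
    }
}
```

One would then pass, for example, Math.max(minBufferSize * 4, PcmBufferSize.forMillis(sampleRateInHz, numberOfChannels, 250)) as the bufferSizeInBytes argument to the AudioTrack constructor, so the track holds roughly a quarter second of audio before underrunning.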