
Media (91)
-
Valkaama DVD Cover Outside
4 October 2011
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Valkaama DVD Cover Inside
4 October 2011
Updated: October 2011
Language: English
Type: Image
-
1,000,000
27 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Demon Seed
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
The Four of Us are Dying
26 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (111)
-
Support for all media types
10 April 2011. Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other data (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)
-
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Automatic MediaSPIP installation script
25 April 2011. To work around the installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to simplify this step on a server running a compatible Linux distribution.
To use it you need SSH access to your server and a "root" account, which makes it possible to install the dependencies. Contact your hosting provider if you do not have these.
The documentation on using the installation script (...)
On other sites (8640)
-
How to convert the same audio twice using libswresample's swr_convert
25 July 2019, by JoshuaCWebDeveloper
I'm working on an audio processing system that sometimes requires that the same audio be resampled twice. The first resampling of the audio with FFmpeg works fine; the second results in distorted audio. I've reproduced this problem by modifying the resampling_audio example provided by FFmpeg. How do I convert the same audio twice using swr_convert?
Below I've attached a modified version of the resampling_audio example. In order to reproduce the issue, follow these steps:
example. In order to reproduce the issue, follow these steps :- Clone FFmepg project at https://github.com/FFmpeg/FFmpeg
- Run
./configure
- Run
make -j4 examples
(this will take awhile the first time) - Run
doc/examples/resampling_audio
to produce expected output - Replace
doc/examples/resampling_audio.c
with the version I’ve attached below - Run
make -j4 examples
- Run
doc/examples/resampling_audio
again (with new args) to output two new files (one for each conversion). - Import each file into Audacity as raw data, the first file should be 44100 Hz, the second should be 32000 Hz.
- The first file will sound the same as the original, the second file will be distorted.
The environment I ran this in was Ubuntu 16.04; I then copied the output files to a Windows PC to open them in Audacity.
Here is my modified resampling_audio.c file. I've created some new variables and copied the blocks of code that do the conversion. The first conversion should be unchanged; the second conversion takes in data from the first conversion and attempts to convert it again.
/*
* Copyright (c) 2012 Stefano Sabatini
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @example resampling_audio.c
* libswresample API use example.
*/
#include <libavutil/opt.h>
#include <libavutil/channel_layout.h>
#include <libavutil/samplefmt.h>
#include <libswresample/swresample.h>
static int get_format_from_sample_fmt(const char **fmt,
enum AVSampleFormat sample_fmt)
{
int i;
struct sample_fmt_entry {
enum AVSampleFormat sample_fmt; const char *fmt_be, *fmt_le;
} sample_fmt_entries[] = {
{ AV_SAMPLE_FMT_U8, "u8", "u8" },
{ AV_SAMPLE_FMT_S16, "s16be", "s16le" },
{ AV_SAMPLE_FMT_S32, "s32be", "s32le" },
{ AV_SAMPLE_FMT_FLT, "f32be", "f32le" },
{ AV_SAMPLE_FMT_DBL, "f64be", "f64le" },
};
*fmt = NULL;
for (i = 0; i < FF_ARRAY_ELEMS(sample_fmt_entries); i++) {
struct sample_fmt_entry *entry = &sample_fmt_entries[i];
if (sample_fmt == entry->sample_fmt) {
*fmt = AV_NE(entry->fmt_be, entry->fmt_le);
return 0;
}
}
fprintf(stderr,
"Sample format %s not supported as output format\n",
av_get_sample_fmt_name(sample_fmt));
return AVERROR(EINVAL);
}
/**
* Fill dst buffer with nb_samples, generated starting from t.
*/
static void fill_samples(double *dst, int nb_samples, int nb_channels, int sample_rate, double *t)
{
int i, j;
double tincr = 1.0 / sample_rate, *dstp = dst;
const double c = 2 * M_PI * 440.0;
/* generate sin tone with 440Hz frequency and duplicated channels */
for (i = 0; i < nb_samples; i++) {
*dstp = sin(c * *t);
for (j = 1; j < nb_channels; j++)
dstp[j] = dstp[0];
dstp += nb_channels;
*t += tincr;
}
}
int main(int argc, char **argv)
{
int64_t src_ch_layout = AV_CH_LAYOUT_STEREO, dst_ch_layout = AV_CH_LAYOUT_SURROUND;
int src_rate = 48000, dst_rate = 44100;
uint8_t **src_data = NULL, **dst_data = NULL, **dst_data2 = NULL;
int src_nb_channels = 0, dst_nb_channels = 0;
int src_linesize, dst_linesize;
int src_nb_samples = 1024, dst_nb_samples, max_dst_nb_samples, dst_nb_samples2, max_dst_nb_samples2;
enum AVSampleFormat src_sample_fmt = AV_SAMPLE_FMT_DBL, dst_sample_fmt = AV_SAMPLE_FMT_S16;
const char *dst_filename = NULL, *dst_filename2 = NULL;
FILE *dst_file, *dst_file2;
int dst_bufsize, dst_bufsize2;
const char *fmt;
struct SwrContext *swr_ctx;
struct SwrContext *swr_ctx2 = NULL;
double t;
int ret;
if (argc != 3) {
fprintf(stderr, "Usage: %s output_file_first output_file_second\n"
"API example program to show how to resample an audio stream with libswresample.\n"
"This program generates a series of audio frames, resamples them to a specified "
"output format and rate and saves them to an output file named output_file.\n",
argv[0]);
exit(1);
}
dst_filename = argv[1];
dst_filename2 = argv[2];
dst_file = fopen(dst_filename, "wb");
if (!dst_file) {
fprintf(stderr, "Could not open destination file %s\n", dst_filename);
exit(1);
}
dst_file2 = fopen(dst_filename2, "wb");
if (!dst_file2) {
fprintf(stderr, "Could not open destination file 2 %s\n", dst_filename2);
exit(1);
}
/* create resampler context */
swr_ctx = swr_alloc();
if (!swr_ctx) {
fprintf(stderr, "Could not allocate resampler context\n");
ret = AVERROR(ENOMEM);
goto end;
}
/* set options */
av_opt_set_int(swr_ctx, "in_channel_layout", src_ch_layout, 0);
av_opt_set_int(swr_ctx, "in_sample_rate", src_rate, 0);
av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", src_sample_fmt, 0);
av_opt_set_int(swr_ctx, "out_channel_layout", dst_ch_layout, 0);
av_opt_set_int(swr_ctx, "out_sample_rate", dst_rate, 0);
av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", dst_sample_fmt, 0);
/* initialize the resampling context */
if ((ret = swr_init(swr_ctx)) < 0) {
fprintf(stderr, "Failed to initialize the resampling context\n");
goto end;
}
/* create resampler context 2 */
swr_ctx2 = swr_alloc();
if (!swr_ctx2) {
fprintf(stderr, "Could not allocate resampler context 2\n");
ret = AVERROR(ENOMEM);
goto end;
}
/* set options */
av_opt_set_int(swr_ctx2, "in_channel_layout", dst_ch_layout, 0);
av_opt_set_int(swr_ctx2, "in_sample_rate", dst_rate, 0);
av_opt_set_sample_fmt(swr_ctx2, "in_sample_fmt", dst_sample_fmt, 0);
av_opt_set_int(swr_ctx2, "out_channel_layout", dst_ch_layout, 0);
av_opt_set_int(swr_ctx2, "out_sample_rate", 32000, 0);
av_opt_set_sample_fmt(swr_ctx2, "out_sample_fmt", dst_sample_fmt, 0);
/* initialize the resampling context */
if ((ret = swr_init(swr_ctx2)) < 0) {
fprintf(stderr, "Failed to initialize the resampling context 2\n");
goto end;
}
/* allocate source and destination samples buffers */
src_nb_channels = av_get_channel_layout_nb_channels(src_ch_layout);
ret = av_samples_alloc_array_and_samples(&src_data, &src_linesize, src_nb_channels,
src_nb_samples, src_sample_fmt, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate source samples\n");
goto end;
}
/* compute the number of converted samples: buffering is avoided
* ensuring that the output buffer will contain at least all the
* converted input samples */
max_dst_nb_samples = dst_nb_samples =
av_rescale_rnd(src_nb_samples, dst_rate, src_rate, AV_ROUND_UP);
/* buffer is going to be directly written to a rawaudio file, no alignment */
dst_nb_channels = av_get_channel_layout_nb_channels(dst_ch_layout);
ret = av_samples_alloc_array_and_samples(&dst_data, &dst_linesize, dst_nb_channels,
dst_nb_samples, dst_sample_fmt, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate destination samples\n");
goto end;
}
/* compute the number of converted samples: buffering is avoided
* ensuring that the output buffer will contain at least all the
* converted input samples */
max_dst_nb_samples2 = dst_nb_samples2 =
av_rescale_rnd(dst_nb_samples, 32000, dst_rate, AV_ROUND_UP);
/* buffer is going to be directly written to a rawaudio file, no alignment */
// dst_nb_channels2 = av_get_channel_layout_nb_channels(dst_ch_layout);
ret = av_samples_alloc_array_and_samples(&dst_data2, &dst_linesize, dst_nb_channels,
dst_nb_samples2, dst_sample_fmt, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate destination samples 2\n");
goto end;
}
t = 0;
do {
/* generate synthetic audio */
fill_samples((double *)src_data[0], src_nb_samples, src_nb_channels, src_rate, &t);
/* compute destination number of samples */
dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, src_rate) +
src_nb_samples, dst_rate, src_rate, AV_ROUND_UP);
if (dst_nb_samples > max_dst_nb_samples) {
av_freep(&dst_data[0]);
ret = av_samples_alloc(dst_data, &dst_linesize, dst_nb_channels,
dst_nb_samples, dst_sample_fmt, 1);
if (ret < 0)
break;
max_dst_nb_samples = dst_nb_samples;
}
/* convert to destination format */
ret = swr_convert(swr_ctx, dst_data, dst_nb_samples, (const uint8_t **)src_data, src_nb_samples);
if (ret < 0) {
fprintf(stderr, "Error while converting\n");
goto end;
}
dst_bufsize = av_samples_get_buffer_size(&dst_linesize, dst_nb_channels,
ret, dst_sample_fmt, 1);
if (dst_bufsize < 0) {
fprintf(stderr, "Could not get sample buffer size\n");
goto end;
}
printf("t:%f in:%d out:%d\n", t, src_nb_samples, ret);
fwrite(dst_data[0], 1, dst_bufsize, dst_file);
/* compute destination number of samples 2 */
dst_nb_samples2 = av_rescale_rnd(swr_get_delay(swr_ctx2, dst_rate) +
dst_nb_samples2, 32000, dst_rate, AV_ROUND_UP);
if (dst_nb_samples2 > max_dst_nb_samples2) {
av_freep(&dst_data2[0]);
ret = av_samples_alloc(dst_data2, &dst_linesize, dst_nb_channels,
dst_nb_samples2, dst_sample_fmt, 1);
if (ret < 0)
break;
max_dst_nb_samples2 = dst_nb_samples2;
}
/* convert to destination format */
ret = swr_convert(swr_ctx2, dst_data2, dst_nb_samples2, (const uint8_t **)dst_data, dst_nb_samples);
if (ret < 0) {
fprintf(stderr, "Error while converting 2\n");
goto end;
}
dst_bufsize2 = av_samples_get_buffer_size(&dst_linesize, dst_nb_channels,
ret, dst_sample_fmt, 1);
if (dst_bufsize2 < 0) {
fprintf(stderr, "Could not get sample buffer size 2\n");
goto end;
}
printf("t:%f in:%d out:%d\n", t, dst_nb_samples, ret);
fwrite(dst_data2[0], 1, dst_bufsize2, dst_file2);
} while (t < 10);
if ((ret = get_format_from_sample_fmt(&fmt, dst_sample_fmt)) < 0)
goto end;
fprintf(stderr, "Resampling succeeded. Play the output file with the command:\n"
"ffplay -f %s -channel_layout %"PRId64" -channels %d -ar %d %s\n",
fmt, dst_ch_layout, dst_nb_channels, dst_rate, dst_filename);
end:
fclose(dst_file);
fclose(dst_file2);
if (src_data)
av_freep(&src_data[0]);
av_freep(&src_data);
if (dst_data)
av_freep(&dst_data[0]);
av_freep(&dst_data);
if (dst_data2)
av_freep(&dst_data2[0]);
av_freep(&dst_data2);
swr_free(&swr_ctx);
return ret < 0;
}
-
ffmpeg in C# (ScrCpy)
23 January 2020, by RunicSheep
I'm trying to access the screen of my Android device like scrcpy does (https://github.com/Genymobile/scrcpy), but in C#.
What I've done so far is push the jar (server) to my device and receive the input (device resolution etc.).
But I can't reimplement the decoding process in C#; there has to be some sort of error so far.
The C# library used for FFmpeg is FFmpeg.AutoGen (https://github.com/Ruslan-B/FFmpeg.AutoGen). Here's the decoding code:
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Sockets;
using System.Runtime.InteropServices;
using System.Runtime.InteropServices.ComTypes;
using System.Threading;
using FFmpeg.AutoGen;
namespace Source.Android.Scrcpy
{
public unsafe class Decoder
{
private const string LD_LIBRARY_PATH = "LD_LIBRARY_PATH";
private AVFrame* _decodingFrame;
private AVCodec* _codec;
private AVCodecContext* _codec_ctx;
private AVFormatContext* _format_ctx;
public Decoder()
{
RegisterFFmpegBinaries();
SetupLogging();
this.InitFormatContext();
}
private void InitFormatContext()
{
_decodingFrame = ffmpeg.av_frame_alloc();
_codec = ffmpeg.avcodec_find_decoder(AVCodecID.AV_CODEC_ID_H264);
if (_codec == null)
{
throw new Exception("H.264 decoder not found");// run_end;
}
_codec_ctx = ffmpeg.avcodec_alloc_context3(_codec);
if (_codec_ctx == null)
{
throw new Exception("Could not allocate decoder context"); //run_end
}
if (ffmpeg.avcodec_open2(_codec_ctx, _codec, null) < 0)
{
throw new Exception("Could not open H.264 codec");// run_finally_free_codec_ctx
}
_format_ctx = ffmpeg.avformat_alloc_context();
if (_format_ctx == null)
{
throw new Exception("Could not allocate format context");// run_finally_close_codec;
}
}
private void RegisterFFmpegBinaries()
{
switch (Environment.OSVersion.Platform)
{
case PlatformID.Win32NT:
case PlatformID.Win32S:
case PlatformID.Win32Windows:
var current = Environment.CurrentDirectory;
var probe = Path.Combine("FFmpeg", Environment.Is64BitProcess ? "x64" : "x86");
while (current != null)
{
var ffmpegDirectory = Path.Combine(current, probe);
if (Directory.Exists(ffmpegDirectory))
{
Console.WriteLine($"FFmpeg binaries found in: {ffmpegDirectory}");
RegisterLibrariesSearchPath(ffmpegDirectory);
return;
}
current = Directory.GetParent(current)?.FullName;
}
break;
case PlatformID.Unix:
case PlatformID.MacOSX:
var libraryPath = Environment.GetEnvironmentVariable(LD_LIBRARY_PATH);
RegisterLibrariesSearchPath(libraryPath);
break;
}
}
private static void RegisterLibrariesSearchPath(string path)
{
switch (Environment.OSVersion.Platform)
{
case PlatformID.Win32NT:
case PlatformID.Win32S:
case PlatformID.Win32Windows:
SetDllDirectory(path);
break;
case PlatformID.Unix:
case PlatformID.MacOSX:
string currentValue = Environment.GetEnvironmentVariable(LD_LIBRARY_PATH);
if (string.IsNullOrWhiteSpace(currentValue) == false && currentValue.Contains(path) == false)
{
string newValue = currentValue + Path.PathSeparator + path;
Environment.SetEnvironmentVariable(LD_LIBRARY_PATH, newValue);
}
break;
}
}
[DllImport("kernel32", SetLastError = true)]
private static extern bool SetDllDirectory(string lpPathName);
private AVPacket GetPacket()
{
var packet = ffmpeg.av_packet_alloc();
ffmpeg.av_init_packet(packet);
packet->data = null;
packet->size = 0;
return *packet;
}
private static int read_raw_packet(void* opaque, ushort* buffer, int bufSize)
{
var buffSize = 1024;
var remaining = dt.Length - dtp - 1;
var written = 0;
for (var i = 0; i < buffSize && i+dtp < dt.Length; i++)
{
buffer[i] = dt[i+dtp];
written++;
}
dtp += written;
if (written <= 0)
{
return ffmpeg.AVERROR_EOF;
}
return written;
}
[UnmanagedFunctionPointer(CallingConvention.Cdecl)]
public delegate int av_read_function_callback(void* opaque, ushort* endData, int bufSize);
private static byte[] dt;
private static int dtp;
public Bitmap DecodeScrCpy(byte[] data)
{
if (data.Length == 0)
{
return null;
}
byte* _buffer;
ulong _bufferSize = 1024*2;
_buffer = (byte*)ffmpeg.av_malloc(_bufferSize);
if (_buffer == null)
{
throw new Exception("Could not allocate buffer"); // run_finally_free_format_ctx;
}
fixed (byte* dataPtr = data)
{
dt = data;
dtp = 0;
fixed (AVFormatContext** formatCtxPtr = &_format_ctx)
{
var mReadCallbackFunc = new av_read_function_callback(read_raw_packet);
AVIOContext* avio_ctx = ffmpeg.avio_alloc_context(_buffer, (int)_bufferSize, 0, null, //TODO: use IntPtr.Zero?
new avio_alloc_context_read_packet_func{Pointer = Marshal.GetFunctionPointerForDelegate(mReadCallbackFunc) },
null, null);
if (avio_ctx == null)
{
ffmpeg.av_free(dataPtr);
throw new Exception("Could not allocate avio context"); //goto run_finally_free_format_ctx;
}
_format_ctx->pb = avio_ctx;
if (ffmpeg.avformat_open_input(formatCtxPtr, null, null, null) < 0)
{
throw new Exception("Could not open video stream"); // goto run_finally_free_avio_ctx;
}
var packet = GetPacket();
while (ffmpeg.av_read_frame(_format_ctx, &packet) == 0)
{
if (ffmpeg.LIBAVDEVICE_VERSION_INT >= ffmpeg.AV_VERSION_INT(57, 37, 0))
{
int ret;
if ((ret = ffmpeg.avcodec_send_packet(_codec_ctx, &packet)) < 0)
{
throw new Exception($"Could not send video packet: {ret}"); //goto run_quit
}
ret = ffmpeg.avcodec_receive_frame(_codec_ctx, _decodingFrame);
if (ret == 0)
{
// a frame was received
}
else if (ret != ffmpeg.AVERROR(ffmpeg.EAGAIN))
{
ffmpeg.av_packet_unref(&packet);
throw new Exception($"Could not receive video frame: {ret}"); //goto run_quit;
}
}
}
}
}
return null;
}
}
}
The entry point is DecodeScrCpy, called with data read from the network stream.
Things I noticed:
- read_raw_packet is called again after ffmpeg.AVERROR_EOF is returned
- ffmpeg.avformat_open_input fails
The question is: did I miss anything?
-
lavu/sha: Fully unroll the transform function loops
10 September 2013, by James Almer
lavu/sha: Fully unroll the transform function loops
crypto_bench SHA-1 and SHA-256 results using an AMD Athlon X2 7750+, mingw32-w64 GCC 4.7.3 x86_64

Before:
lavu SHA-1 size: 1048576 runs: 1024 time: 9.012 +- 0.162
lavu SHA-256 size: 1048576 runs: 1024 time: 19.625 +- 0.173

After:
lavu SHA-1 size: 1048576 runs: 1024 time: 7.948 +- 0.154
lavu SHA-256 size: 1048576 runs: 1024 time: 17.841 +- 0.170

Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>