
Other articles (97)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
APPENDIX: Plugins used specifically for the farm
5 March 2010. The central/master site of the farm needs several additional plugins, beyond those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled instance when users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
-
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
On other sites (10197)
-
FFmpeg: Get better encoding out of my function
31 October 2019, by A Person. I need some assistance with my task.
I am using FFmpeg to burn the time and the channel name onto the video.
My goal is to record a stream that is HTML5 compatible, with the following settings:
Video wrapper: MP4
Video codec: H.264
Bitrate: 1 Mbps
Audio codec: AAC
Audio bitrate: 128 Kbps
and GPU encoding.
This is what I am using:
ffmpeg -hwaccel cuvid -y -i {udp} -vf "drawtext=fontfile=calibrib.ttf:fontsize=25:text='{ChannelName} %{localtime}': x=10: y=10: fontcolor=white: box=1: boxcolor=0x000000" -pix_fmt yuv420p -vsync 1 -c:v h264_nvenc -r 25 -threads 0 -b:v 1M -profile:v main -minrate 1M -maxrate 1M -bufsize 10M -sc_threshold 0 -c:a aac -b:a 128k -ac 2 -ar 44100 -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -bsf:v h264_mp4toannexb -t 00:30:00 {output}\{ChannelName}\{ChannelName}_{year}_{monthno}_{day}__{Hours}_{Minutes}_{Seconds}.mp4
{ChannelName}, {year}, {monthno}, {day}, {Hours}, {Minutes} and {Seconds} are all variables holding information. {udp} holds the UDP stream link. I have done it this way as I have multiple UDP streams recording.
Although this works, is there a better way for me to do this while keeping the -vf option, as I need the time and channel name? Currently this uses between 0.8% and 1.9% GPU on my Quadro P4000. I don't want to use more than this, as I have more than 30 streams.
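For comparison, one possible variant is sketched below; it is untested and assumes an ffmpeg build whose h264_nvenc encoder exposes the -preset p1..p7 names and the -rc cbr rate-control option. It keeps the same -vf drawtext filter and drops -bsf:v h264_mp4toannexb, which converts H.264 to Annex B framing for transport streams and is not needed when writing MP4:
ffmpeg -y -hwaccel cuda -i {udp} -vf "drawtext=fontfile=calibrib.ttf:fontsize=25:text='{ChannelName} %{localtime}': x=10: y=10: fontcolor=white: box=1: boxcolor=0x000000" -c:v h264_nvenc -preset p4 -rc cbr -b:v 1M -maxrate 1M -bufsize 2M -r 25 -pix_fmt yuv420p -c:a aac -b:a 128k -ac 2 -ar 44100 -t 00:30:00 {output}\{ChannelName}\{ChannelName}_{year}_{monthno}_{day}__{Hours}_{Minutes}_{Seconds}.mp4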
-
avformat/matroskaenc: Don't use size of inexistent Cluster
22 January 2020, by Andreas Rheinhardt. avformat/matroskaenc: Don't use size of inexistent Cluster
In order to determine whether the current Cluster needs to be closed
because of the limits on clustersize and clustertime,
mkv_write_packet() would first get the size of the current Cluster by
applying avio_tell() on the dynamic buffer holding the current Cluster.
It did this without checking whether a dynamic buffer for writing
Clusters is open right now.

In this case (which happens when writing the first packet), avio_tell()
returned AVERROR(EINVAL); yet it is not good to rely on avio_tell() (or
actually, avio_seek()) to handle the situation gracefully.

Fixing this is easy: only check whether a Cluster needs to be closed if
a Cluster is in fact open.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
-
How do I use FFMPEG/libav to access the data in individual audio samples?
15 October 2022, by Breadsnshreds. The end result is that I'm trying to visualise the audio waveform for use in DAW-like software. So I want to get each sample's value and draw it. With that in mind, I'm currently stumped trying to gain access to the values stored in each sample. For the time being, I'm just trying to access the value in the first sample; I'll build it into a loop once I have some working code.


I started off by following the code in this example. However, LibAV/FFMPEG has been updated since then, so a lot of the code is deprecated or straight up doesn't work the same anymore.


Based on the example above, I believe the logic is as follows:

1. get the formatting info of the audio file
2. get audio stream info from the format
3. check that the codec required for the stream is an audio codec
4. get the codec context (I think this is info about the codec) - this is where it gets kinda confusing for me
5. create an empty packet and frame to use - packets are for holding compressed data and frames are for holding uncompressed data
6. the format reads the first frame from the audio file into our packet
7. pass that packet into the codec context to be decoded
8. pass our frame to the codec context to receive the uncompressed audio data of the first frame
9. create a buffer to hold the values and try allocating samples to it from our frame

From debugging my code, I can see that step 7 succeeds and the packet that was empty receives some data. In step 8, the frame doesn't receive any data. This is what I need help with. I get that once I have the frame, assuming a stereo audio file, I should have two channels of samples per frame, so really I just need your help getting uncompressed data into the frame.


I've scoured the documentation for loads of different classes and I'm pretty sure I'm using the right classes and functions to achieve my goal, but evidently not (I'm also using Qt, so I'm using qDebug throughout, and a QString named path to hold the URL of the audio file). So without further ado, here's my code:


// Step 1 - get the formatting info of the audio file
 AVFormatContext* format = avformat_alloc_context();
 if (avformat_open_input(&format, path.toStdString().c_str(), NULL, NULL) != 0) {
 qDebug() << "Could not open file " << path;
 return -1;
 }

// Step 2 - get audio stream info from the format
 if (avformat_find_stream_info(format, NULL) < 0) {
 qDebug() << "Could not retrieve stream info from file " << path;
 return -1;
 }

// Step 3 - check that the codec required for the stream is an audio codec
 int stream_index = -1;
 for (unsigned int i = 0; i < format->nb_streams; i++) {
 if (format->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
 stream_index = i;
 break;
 }
 }

 if (stream_index == -1) {
 qDebug() << "Could not retrieve audio stream from file " << path;
 return -1;
 }

// Step 4 - get the codec context
 const AVCodec *codec = avcodec_find_decoder(format->streams[stream_index]->codecpar->codec_id);
 AVCodecContext *codecContext = avcodec_alloc_context3(codec);
 avcodec_open2(codecContext, codec, NULL);

// Step 5 - create an empty packet and frame to use
 AVPacket *packet = av_packet_alloc();
 AVFrame *frame = av_frame_alloc();

// Step 6 - the format reads the first frame from the audio file into our packet
 av_read_frame(format, packet);
// Step 7 - pass that packet into the codec context to be decoded
 avcodec_send_packet(codecContext, packet);
//Step 8 - pass our frame to the codec context to receive the uncompressed audio data of the first frame
 avcodec_receive_frame(codecContext, frame);

// Step 9 - create a buffer to hold the values and try allocating samples to it from our frame
 double *buffer;
 av_samples_alloc((uint8_t**) &buffer, NULL, 1, frame->nb_samples, AV_SAMPLE_FMT_DBL, 0);
 qDebug() << "packet: " << &packet;
 qDebug() << "frame: " << frame;
 qDebug() << "buffer: " << buffer;



For the time being, step 9 is incomplete, as you can probably tell. But for now, I need help with step 8. Am I missing a step, using the wrong function, or using the wrong class? Cheers.
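For reference, here is a minimal sketch of what the decode path typically needs with a recent FFmpeg (5.x) API; variable names mirror the code above, and the two suggestions are hedged guesses rather than a verified fix. First, the codec context above is never filled from the stream's codecpar before avcodec_open2(), so the decoder is opened without the file's sample rate, channel layout or sample format. Second, avcodec_receive_frame() may legitimately return AVERROR(EAGAIN) until enough packets have been sent, so the send/receive calls need their return values checked and, if necessary, looped:

 // Sketch (an assumption about the fix, not verified against the asker's
 // setup): fill the codec context from the stream before opening it, then
 // loop send/receive until the first decoded frame arrives.
 const AVCodec *codec = avcodec_find_decoder(format->streams[stream_index]->codecpar->codec_id);
 AVCodecContext *codecContext = avcodec_alloc_context3(codec);
 // Likely missing step: copy the stream's parameters into the decoder.
 avcodec_parameters_to_context(codecContext, format->streams[stream_index]->codecpar);
 avcodec_open2(codecContext, codec, NULL);

 AVPacket *packet = av_packet_alloc();
 AVFrame *frame = av_frame_alloc();

 while (av_read_frame(format, packet) >= 0) {
     if (packet->stream_index != stream_index) {
         av_packet_unref(packet); // not the audio stream, skip it
         continue;
     }
     int ret = avcodec_send_packet(codecContext, packet);
     av_packet_unref(packet);
     if (ret < 0)
         break; // real decoding error
     ret = avcodec_receive_frame(codecContext, frame);
     if (ret == AVERROR(EAGAIN))
         continue; // decoder needs more packets before it can output
     if (ret < 0)
         break; // real decoding error
     // First uncompressed frame: frame->nb_samples samples per channel,
     // stored in frame->data[] according to codecContext->sample_fmt.
     break;
 }

If the frame then arrives but the values still look wrong, check codecContext->sample_fmt: for planar formats such as AV_SAMPLE_FMT_FLTP, each channel lives in its own frame->data[ch] plane rather than interleaved in frame->data[0].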