
Other articles (35)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: it is fully graphically customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
-
Configurable image and logo sizes
9 February 2011
In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can vary from one theme to another, they can be defined directly in the theme, sparing the user from having to configure them manually after changing the site's appearance.
These image sizes are also available in the specific configuration of MediaSPIP Core. The maximum size of the site logo in pixels (...)
-
Farm management
2 March 2010
The farm as a whole is managed by "super admins".
Certain settings can be adjusted in order to regulate the needs of the different channels.
Initially, it uses the "Gestion de mutualisation" plugin.
On other websites (5469)
-
Discord.js voice stop playing audio after 10 consecutive files
29 April 2021, by Spiralio
I am trying to do the simple task of playing a single MP3 file when a command is run. The file is stored locally, and I have FFmpeg installed on my computer. The code below is part of my command's file:


const Discord = require("discord.js");
const fs = require('fs');
const { Client, RichEmbed } = require('discord.js');
const config = require("../config.json");

// The dispatcher for the currently playing sound, if any
let playing = undefined;
// The bot's current voice connection, if any
let connection = undefined;

module.exports.run = async (client, message, args, config) => {

 // Stop the previous sound if one is still playing
 if (playing) playing.end()
 // Join the caller's voice channel on first use and cache the connection
 if (connection == undefined) await message.member.voice.channel.join().then((c) => {
 connection = c;
 })
 // Play the local MP3 and keep a handle to the dispatcher
 playing = connection.play('./sounds/sound.mp3')

}



(Note that this code has been heavily stripped down to isolate the issue.)


When I run the command the first 9 times, it works perfectly: the file plays, and it cuts off if it is already playing. I also want to note that the file is 2 minutes long. However, once I play the file for exactly the 10th time, the bot stops playing audio entirely, as long as all 10 plays overlap (meaning I don't let the audio finish).


What's more confusing is that if an error is thrown after the bot stops playing audio, it appears in an entirely different format than standard Discord.js errors. For example, this code does not check whether the user is in a voice channel, so if I purposefully crash the bot by invoking the command while not in a voice channel (after running the command 10 times), the error looks like this:


abort(RangeError: offset is out of bounds). Build with -s ASSERTIONS=1 for more info.
(Use `electron --trace-uncaught ...` to show where the exception was thrown)



(Preceded by a large amount of unformatted code.) This, however, is not consistent; it seems to appear only after letting the files play through entirely.


The issue only resolves itself when the entire bot is restarted. Any help would be appreciated.
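For context, a minimal sketch of one avenue worth trying, assuming discord.js v12: destroying the previous dispatcher instead of only ending it, so the underlying stream resources are released before a new play() starts (whether this lifts the 10-play ceiling here is an assumption, not a confirmed fix):


// Sketch (assumed discord.js v12): release the old dispatcher's resources
// before starting a new one. destroy() is the StreamDispatcher method that
// frees the stream, whereas end() only stops playback.
if (playing) {
 playing.destroy();
 playing = undefined;
}
playing = connection.play('./sounds/sound.mp3');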


-
pts and dts problems while encoding multiple streams to AVFormatContext with libavcodec and libavformat
20 November 2022, by WalleyM
I am trying to encode an mpeg2video stream and a signed PCM 32-bit audio stream to a .mov file using FFmpeg's avcodec and avformat libraries.


My video stream is set up in almost exactly the same way as described here, and my audio stream is set up in a very similar way.


My time_base for both audio and video is set to 1/fps.


Here is the overview output from setting up the encoder:




Output #0, mov, to ' /Recordings/SDI_Video.mov':

Metadata:

encoder : Lavf59.27.100

Stream #0:0: Video: mpeg2video (m2v1 / 0x3176326D), yuv420p, 1920x1080, q=2-31, 207360 kb/s, 90k tbn

Stream #0:1: Audio: pcm_s32be (in32 / 0x32336E69), 48000 Hz, stereo, s32, 3072 kb/s



As I understand it, pts should be when the frame is presented, while dts should be when the frame is decoded. This means that audio and video frame pts should be the same, whereas dts should be incremental between them.


Essentially, this means interleaved audio and video frames should be in the following pts and dts order:


pts: 1 1 2 2 3 3
dts: 1 2 3 4 5 6


I am using this format to set my pts and dts:


videoFrame->pts = frameCounter; // pts counted in codec time_base units (1/fps)

if(avcodec_send_frame(videoContext, videoFrame) < 0)
{
 std::cout << "Failed to send video frame " << frameCounter << std::endl;
 return;
}

// Prepare an empty packet and assign a hand-computed dts
// before receiving the encoded data
AVPacket videoPkt;
av_init_packet(&videoPkt);
videoPkt.data = nullptr;
videoPkt.size = 0;
videoPkt.flags |= AV_PKT_FLAG_KEY;
videoPkt.stream_index = 0;
videoPkt.dts = frameCounter * 2;

if(avcodec_receive_packet(videoContext, &videoPkt) == 0)
{
 // Mux the packet and release its buffers
 av_interleaved_write_frame(outputFormatContext, &videoPkt);
 av_packet_unref(&videoPkt);
}



The audio path is the same, except:


audioPkt.stream_index = 1;
audioPkt.dts = frameCounter * 2 + 1;



However, I still get problems with my dts values, as shown in this output:




[mov @ 0x7fc1b3667480] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 1 >= 0

[mov @ 0x7fc1b3667480] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 2 >= 1

[mov @ 0x7fc1b3667480] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 3 >= 2



I would like to fix this issue.
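For reference, a minimal sketch of one common pattern, under the assumption that the encoder is left to produce the timestamps: pts is set on the frame in the codec time_base, and the packet returned by avcodec_receive_packet() is rescaled to the stream time_base with av_packet_rescale_ts() instead of assigning dts by hand (videoStream here is an assumed AVStream* for stream #0:0):


// Sketch: let the encoder fill pts/dts, then rescale both from the codec
// time_base (1/fps here) to the muxer's stream time_base before writing.
videoFrame->pts = frameCounter; // counted in codec time_base units

if(avcodec_send_frame(videoContext, videoFrame) < 0)
{
 std::cout << "Failed to send video frame " << frameCounter << std::endl;
 return;
}

AVPacket videoPkt;
av_init_packet(&videoPkt);
videoPkt.data = nullptr;
videoPkt.size = 0;

while(avcodec_receive_packet(videoContext, &videoPkt) == 0)
{
 // Convert the encoder's timestamps into the stream's time_base;
 // the mov muxer then sees monotonically increasing dts per stream.
 av_packet_rescale_ts(&videoPkt, videoContext->time_base, videoStream->time_base);
 videoPkt.stream_index = videoStream->index;
 av_interleaved_write_frame(outputFormatContext, &videoPkt);
 av_packet_unref(&videoPkt);
}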


-
Android Encode h264 using libavcodec for ARGB
12 December 2013, by nmxprime
I have a stream of buffer content which actually contains a 480x800 ARGB image [a byte array of size 480*800*4]. I want to encode around 10,000 similar images into an h.264 stream at a specified fps (12). This shows how to encode images into video, but requires the input to be YUV420.
Now I have ARGB images, and I want to encode them to CODEC_ID_H264.
How to convert RGB from YUV420p for ffmpeg encoder? shows how to do it for RGB24, but how do I do it for RGB32, meaning ARGB image data? How do I use libavcodec for this?
EDIT: I found How to convert RGB from YUV420p for ffmpeg encoder?
But I don't understand. From the 1st link, I learned that the AVFrame struct contains data[0], data[1] and data[2], which are filled with the Y, U and V values.
In the 2nd link, they show how to use sws_scale to convert RGB24 to YUV420, like so:
SwsContext * ctx = sws_getContext(imgWidth, imgHeight,
                                  AV_PIX_FMT_RGB24, imgWidth, imgHeight,
                                  AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
uint8_t * inData[1] = { rgb24Data }; // RGB24 has one plane
int inLinesize[1] = { 3*imgWidth };  // RGB stride
sws_scale(ctx, inData, inLinesize, 0, imgHeight, dst_picture.data, dst_picture.linesize);

Here I assume that rgb24Data is the buffer containing the RGB24 image bytes.
So how do I use this information for ARGB, which is 32-bit? Do I need to manually strip off the alpha channel, or is there another workaround?
Thank you
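A minimal sketch of how the same sws_scale call might look for 32-bit input, with two stated assumptions: the buffer argbData is hypothetical, and the source pixel format constant (AV_PIX_FMT_ARGB, AV_PIX_FMT_RGBA or AV_PIX_FMT_BGRA) must be chosen to match the actual byte order of the image data. No manual alpha stripping should be needed, since swscale discards the alpha channel during the conversion to YUV420P:


// Sketch: convert packed 32-bit ARGB data to YUV420P for the encoder.
// Pick the AV_PIX_FMT_* constant that matches the byte order of argbData.
SwsContext * ctx = sws_getContext(imgWidth, imgHeight,
                                  AV_PIX_FMT_ARGB, imgWidth, imgHeight,
                                  AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
uint8_t * inData[1] = { argbData };   // one packed plane, 4 bytes per pixel
int inLinesize[1] = { 4 * imgWidth }; // stride is width * 4 bytes
sws_scale(ctx, inData, inLinesize, 0, imgHeight,
          dst_picture.data, dst_picture.linesize);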