
Other articles (95)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled instance as soon as users sign up; the verifier plugin, which provides a field-verification API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (7462)

  • Android Encode h264 using libavcodec for ARGB

    12 December 2013, by nmxprime

    I have a stream of buffer content which actually contains a 480x800 ARGB image [a byte array of size 480*800*4]. I want to encode tens of thousands of similar images into an h.264 stream at a specified fps (12). This shows how to encode images into video, but it requires the input to be YUV420.

    Now I have ARGB images and I want to encode them to CODEC_ID_H264.
    How to convert RGB from YUV420p for ffmpeg encoder ? shows how to do it for rgb24, but how do I do it for rgb32, meaning ARGB image data?

    How do I use libavcodec for this?
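
    (For context, a minimal hedged sketch of opening an H.264 encoder with libavcodec; AV_CODEC_ID_H264 is the newer spelling of CODEC_ID_H264, and the bitrate below is an assumption rather than something from the question:)

    // Sketch: open an H.264 encoder for 480x800 YUV420P input at 12 fps.
    // Assumes libavcodec was built with an H.264 encoder (e.g. libx264)
    // and <libavcodec/avcodec.h> is included.
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    AVCodecContext *c = avcodec_alloc_context3(codec);
    c->width     = 480;
    c->height    = 800;
    c->time_base = AVRational{1, 12};   // 12 fps, as in the question
    c->pix_fmt   = AV_PIX_FMT_YUV420P;  // H.264 encoders take planar YUV
    c->bit_rate  = 400000;              // assumed target bitrate
    if (avcodec_open2(c, codec, NULL) < 0) { /* handle error */ }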

    EDIT: I found How to convert RGB from YUV420p for ffmpeg encoder ?, but I don't understand it.

    From the 1st link, I gather that the AVFrame struct contains data[0], data[1] and data[2], which are filled with the Y, U and V values.
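
    (As a hedged sketch of what that looks like with the allocation helper from recent FFmpeg versions, using the dimensions from the question:)

    // Sketch: allocate a writable YUV420P frame; av_frame_get_buffer fills
    // data[0] (Y), data[1] (U), data[2] (V) plus the matching linesize[].
    AVFrame *frame = av_frame_alloc();
    frame->format = AV_PIX_FMT_YUV420P;
    frame->width  = 480;
    frame->height = 800;
    av_frame_get_buffer(frame, 0);  // 0 = default buffer alignment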

    In the 2nd link, they show how to use sws_scale to convert RGB24 to YUV420, like so:

    SwsContext *ctx = sws_getContext(imgWidth, imgHeight, AV_PIX_FMT_RGB24,
                                     imgWidth, imgHeight, AV_PIX_FMT_YUV420P,
                                     0, 0, 0, 0);
    uint8_t *inData[1] = { rgb24Data };    // RGB24 has a single packed plane
    int inLinesize[1] = { 3 * imgWidth };  // RGB stride: 3 bytes per pixel
    sws_scale(ctx, inData, inLinesize, 0, imgHeight,
              dst_picture.data, dst_picture.linesize);

    Here I assume that rgb24Data is the buffer containing the RGB24 image bytes.

    So how do I use this information for ARGB, which is 32-bit? Do I need to manually strip off the alpha channel, or is there another workaround?
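
    (A hedged sketch of that adaptation: libswscale can consume packed 32-bit input directly, so stripping the alpha channel by hand should not be necessary. argbData stands for the 480*800*4 buffer from the question; AV_PIX_FMT_ARGB assumes the bytes are ordered A,R,G,B in memory, and if the buffer is actually B,G,R,A then AV_PIX_FMT_BGRA is the one to pick:)

    // Sketch: convert packed 32-bit ARGB to YUV420P with libswscale;
    // the alpha channel is simply dropped during the conversion.
    SwsContext *ctx = sws_getContext(480, 800, AV_PIX_FMT_ARGB,
                                     480, 800, AV_PIX_FMT_YUV420P,
                                     SWS_BILINEAR, NULL, NULL, NULL);
    uint8_t *inData[1] = { argbData };  // one packed plane
    int inLinesize[1] = { 4 * 480 };    // 4 bytes per ARGB pixel
    sws_scale(ctx, inData, inLinesize, 0, 800,
              frame->data, frame->linesize);  // frame: the YUV420P AVFrame above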

    Thank you

  • pts and dts problems while encoding multiple streams to AVFormatContext with libavcodec and libavformat

    20 November 2022, by WalleyM

    I am trying to encode an mpeg2video stream and a signed PCM 32-bit audio stream to a .mov file using ffmpeg's avcodec and avformat libraries.

    My video stream is set up in almost exactly the way described here, with my audio stream set up in a very similar way.

    My time_base for both audio and video is set to 1/fps.
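
    (For illustration, a minimal hedged sketch of that setup; the context names are assumptions, and note that FFmpeg audio encoders conventionally use a time base of 1/sample_rate rather than 1/fps:)

    // Sketch of the time_base setup described above (names are assumptions).
    videoContext->time_base = AVRational{1, fps};  // one tick per video frame
    audioContext->time_base = AVRational{1, fps};  // as described in the post;
                                                   // {1, 48000} is the usual choice here
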
    Here is the overview output from setting up the encoder:

    Output #0, mov, to ' /Recordings/SDI_Video.mov':
      Metadata:
        encoder         : Lavf59.27.100
      Stream #0:0: Video: mpeg2video (m2v1 / 0x3176326D), yuv420p, 1920x1080, q=2-31, 207360 kb/s, 90k tbn
      Stream #0:1: Audio: pcm_s32be (in32 / 0x32336E69), 48000 Hz, stereo, s32, 3072 kb/s

    As I understand it, pts should be when a frame is presented, while dts should be when it is decoded. This means that corresponding audio and video frames should share the same pts, whereas dts should alternate incrementally between the two streams.

    Essentially, this means interleaved audio and video frames should be in the following pts and dts order:

    pts: 1 1 2 2 3 3
    dts: 1 2 3 4 5 6

    I am using this approach to set my pts and dts:

    videoFrame->pts = frameCounter;

    if (avcodec_send_frame(videoContext, videoFrame) < 0)
    {
        std::cout << "Failed to send video frame " << frameCounter << std::endl;
        return;
    }

    AVPacket videoPkt;
    av_init_packet(&videoPkt);
    videoPkt.data = nullptr;
    videoPkt.size = 0;
    videoPkt.flags |= AV_PKT_FLAG_KEY;
    videoPkt.stream_index = 0;
    videoPkt.dts = frameCounter * 2;

    if (avcodec_receive_packet(videoContext, &videoPkt) == 0)
    {
        av_interleaved_write_frame(outputFormatContext, &videoPkt);
        av_packet_unref(&videoPkt);
    }

    The audio is handled the same way, except:

    audioPkt.stream_index = 1;
    audioPkt.dts = frameCounter * 2 + 1;

    However, I still get problems with my dts values, as shown in this output:

    [mov @ 0x7fc1b3667480] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 1 >= 0
    [mov @ 0x7fc1b3667480] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 2 >= 1
    [mov @ 0x7fc1b3667480] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 3 >= 2

    I would like to fix this issue.
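
    (A plausible way out, as a sketch under assumptions rather than the poster's actual code: dts only has to increase monotonically within each stream, not across streams, so it does not need to be set by hand. Set only pts on the frame in the codec time base, let the encoder fill in the packet's pts/dts, and rescale both into the stream time base before muxing. videoStream below is assumed to be the AVStream created for stream #0:0:)

    videoFrame->pts = frameCounter;  // in videoContext->time_base units

    if (avcodec_send_frame(videoContext, videoFrame) < 0)
        return;

    AVPacket *pkt = av_packet_alloc();
    while (avcodec_receive_packet(videoContext, pkt) == 0)
    {
        // Convert the encoder-assigned pts/dts from the codec time base to
        // the stream time base; the muxer then sees monotonic dts per stream.
        av_packet_rescale_ts(pkt, videoContext->time_base, videoStream->time_base);
        pkt->stream_index = videoStream->index;
        av_interleaved_write_frame(outputFormatContext, pkt);
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);

    The same pattern applies to the audio packets; with a sane audio time base, av_interleaved_write_frame handles the ordering across streams.
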
  • Discord.js voice stops playing audio after 10 consecutive files

    29 April 2021, by Spiralio

    I am trying to do the simple task of playing a single MP3 file when a command is run. The file is stored locally, and I have FFmpeg installed on my computer. The code below is part of my command's file:

    const Discord = require("discord.js");
    const fs = require('fs');
    const { Client, RichEmbed } = require('discord.js');
    const config = require("../config.json");

    let playing = undefined;
    let connection = undefined;

    module.exports.run = async (client, message, args, config) => {

      if (playing) playing.end()
      if (connection == undefined) await message.member.voice.channel.join().then((c) => {
        connection = c;
      })
      playing = connection.play('./sounds/sound.mp3')

    }

    (Note that this code is heavily narrowed down to single out the issue.)

    When I run the command the first 9 times, it works perfectly: the file plays, and it cuts off if it is already playing. I should also note the file is 2 minutes long. However, once I play the file for exactly the 10th time, the bot stops playing audio entirely, as long as all 10 plays overlapped (meaning I never let the audio finish).

    What's more confusing is that if an error occurs after the bot stops playing audio, it appears in an entirely different format than standard Discord.js errors. For example, this code does not check whether the user is in a voice channel, so if I deliberately crash the bot by running the command while not in a voice channel (after running the command 10 times), the error looks like this:

    abort(RangeError: offset is out of bounds). Build with -s ASSERTIONS=1 for more info.
    (Use `electron --trace-uncaught ...` to show where the exception was thrown)

    (This is preceded by a bunch of unformatted output.) It is not consistent, however; it seems to appear only after letting the files play through entirely.

    The issue only fixes itself when the entire bot restarts. Any help would be appreciated.