
Other articles (111)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Automatic installation script for MediaSPIP

    25 April 2011

    To work around installation difficulties, mainly due to server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it, you need SSH access to your server and a "root" account, which is what allows the dependencies to be installed. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

  • Adding user-specific information and other author-related behavior changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviors (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the "champs extras 2" and "Interface pour champs extras" plugins.

On other sites (10307)

  • How Can I Configure Storybook to Use React-App-Rewired?

    8 August 2022, by joseph

    I'm working on a project that uses react-app-rewired so the dev server sends the headers needed to avoid ReferenceError: SharedArrayBuffer is not defined (an error I'm getting from the @ffmpeg/ffmpeg library).

    


// config-overrides.js
const {
  override,
  // disableEsLint,
  // addBabelPlugins,
  // overrideDevServer
} = require('customize-cra')

module.exports = {
  devServer(configFunction) {
    // eslint-disable-next-line func-names
    return function (proxy, allowedHost) {
      const config = configFunction(proxy, allowedHost)

      // Set loose allow origin header to prevent CORS issues
      config.headers = {
        'Access-Control-Allow-Origin': '*',
        'Cross-Origin-Opener-Policy': 'same-origin',
        'Cross-Origin-Embedder-Policy': 'require-corp',
        'Cross-Origin-Resource-Policy': 'cross-origin'
      }

      return config
    }
  }
}


    


// package.json
"scripts": {
  "start": "react-app-rewired start",
  "build": "react-app-rewired build",
  "test": "react-app-rewired test  --transformIgnorePatterns \"node_modules/(?!siriwave)/\"",
  "eject": "react-scripts eject",
  "storybook": "start-storybook -p 6006 -s public",
  "build-storybook": "build-storybook -s public"
}


    


    Though this works when I run npm start, meaning the headers do get sent, it doesn't work when I run npm run storybook, and I still get the SharedArrayBuffer is not defined error. I'm assuming that's because npm run storybook still uses react-scripts rather than react-app-rewired under the hood, but I'm not sure where to change the configuration for this. Any ideas?
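
    One possible direction, offered as an untested sketch: start-storybook runs its own dev server that never reads config-overrides.js, but Storybook 6 can load a custom Express middleware file from the .storybook config directory. Something along these lines might let you serve the same headers there (the middleware.js file name and the router argument follow Storybook's middleware hook; verify against your Storybook version, and treat the rest as an assumption):

// .storybook/middleware.js -- hedged sketch, not verified against this project
module.exports = function expressMiddleware(router) {
  router.use(function (req, res, next) {
    // Mirror the headers from config-overrides.js so the Storybook dev server
    // also enables cross-origin isolation (needed for SharedArrayBuffer / @ffmpeg/ffmpeg)
    res.setHeader('Access-Control-Allow-Origin', '*')
    res.setHeader('Cross-Origin-Opener-Policy', 'same-origin')
    res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp')
    res.setHeader('Cross-Origin-Resource-Policy', 'cross-origin')
    next()
  })
}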

    


  • Discord.js voice stops playing audio after 10 consecutive files

    29 April 2021, by Spiralio

    I am trying to do the simple task of playing a single MP3 file when a command is run. The file is stored locally, and I have FFmpeg installed on my computer. The code below is part of my command's file:

    


const Discord = require("discord.js");
const fs = require('fs');
const { Client, RichEmbed } = require('discord.js');
const config = require("../config.json");

let playing = undefined;
let connection = undefined;

module.exports.run = async (client, message, args, config) => {


  if (playing) playing.end()
  if (connection == undefined) await message.member.voice.channel.join().then((c) => {
    connection = c;
  })
  playing = connection.play('./sounds/sound.mp3')

}


    


    (note that this code is heavily narrowed down to single out the issue)

    


    When I run the command the first 9 times, it works perfectly: the file plays, and cuts off if it is already playing. I also want to note that the file is 2 minutes long. However, once I run the command for exactly the 10th time, the bot stops playing audio entirely, as long as all 10 plays overlap (meaning I don't let the audio finish).

    


    What's more confusing is that if an error is thrown after the bot stops playing audio, it appears in an entirely different format from the standard Discord.js errors. For example, this code does not check whether the user is in a voice channel, so if I deliberately crash the bot by running the command while not in a voice channel (after running the command 10 times), the error looks like this:

    


abort(RangeError: offset is out of bounds). Build with -s ASSERTIONS=1 for more info.
(Use `electron --trace-uncaught ...` to show where the exception was thrown)


    


    (Preceded by a large amount of unformatted output.) This, however, is not consistent; it only seems to appear after letting the files play through entirely.

    


    The issue only fixes itself when the entire bot restarts. Any help would be appreciated.
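
    One hedged idea rather than a confirmed fix: every invocation creates a new dispatcher with its own ffmpeg/opus stream underneath, and end() may not release those resources, so they can pile up until playback silently stops. Here is a sketch of the command that destroys the previous dispatcher first (discord.js v12 assumed, matching the join()/play() calls above; it reuses the module-level playing and connection variables, and whether this removes the 10-play limit is an assumption):

// Hedged sketch: tear the old dispatcher down so its underlying stream is
// released before starting a new play(), instead of only calling end().
module.exports.run = async (client, message, args, config) => {

  if (playing) {
    playing.destroy() // destroy() (from Node's Writable) frees the stream; end() alone may leave it open
    playing = undefined
  }

  if (connection == undefined) {
    connection = await message.member.voice.channel.join()
  }

  playing = connection.play('./sounds/sound.mp3')
  playing.on('finish', () => {
    playing = undefined // clear the reference once the file finishes on its own
  })

}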

    


  • pts and dts problems while encoding multiple streams to AVFormatContext with libavcodec and libavformat

    20 November 2022, by WalleyM

    I am trying to encode an mpeg2video stream and a signed 32-bit PCM audio stream to a .mov file using FFmpeg's libavcodec and libavformat libraries.

    


    My video stream is set up in almost exactly the same way as described here, and my audio stream is set up in a very similar way.

    


    My time_base for both audio and video is set to 1/fps.

    


    Here is the overview output from setting up the encoder:

    


    


    Output #0, mov, to ' /Recordings/SDI_Video.mov':
      Metadata:
        encoder : Lavf59.27.100
      Stream #0:0: Video: mpeg2video (m2v1 / 0x3176326D), yuv420p, 1920x1080, q=2-31, 207360 kb/s, 90k tbn
      Stream #0:1: Audio: pcm_s32be (in32 / 0x32336E69), 48000 Hz, stereo, s32, 3072 kb/s

    


    


    As I understand it, pts should be when the frame is presented, while dts should be when the frame is decoded. This means that audio and video frame pts should be the same, whereas dts should be incremental between them.

    


    Essentially, this means interleaved audio and video frames should have the following pts and dts order:

    


    pts 1 1 2 2 3 3
    dts 1 2 3 4 5 6

    


    I am using this approach to set my pts and dts:

    


videoFrame->pts = frameCounter;

if (avcodec_send_frame(videoContext, videoFrame) < 0)
{
    std::cout << "Failed to send video frame " << frameCounter << std::endl;
    return;
}

AVPacket videoPkt;
av_init_packet(&videoPkt);
videoPkt.data = nullptr;
videoPkt.size = 0;
videoPkt.flags |= AV_PKT_FLAG_KEY;
videoPkt.stream_index = 0;
videoPkt.dts = frameCounter * 2;

if (avcodec_receive_packet(videoContext, &videoPkt) == 0)
{
    av_interleaved_write_frame(outputFormatContext, &videoPkt);
    av_packet_unref(&videoPkt);
}


    


    The audio side is the same, except:

    


    audioPkt.stream_index = 1;
audioPkt.dts = frameCounter * 2 + 1;


    


    However, I still get problems with my dts values, as shown in this output:

    


    


    [mov @ 0x7fc1b3667480] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 1 >= 0
    [mov @ 0x7fc1b3667480] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 2 >= 1
    [mov @ 0x7fc1b3667480] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 3 >= 2

    


    


    I would like to fix this issue.
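
    For what it's worth, here is a hedged sketch of the usual libavformat pattern, not verified against this exact setup: leave pts/dts assignment to the encoder, then rescale the packet's timestamps from the codec time_base (1/fps here) into the stream's time_base with av_packet_rescale_ts() before av_interleaved_write_frame(), rather than overwriting dts by hand. The videoStream variable stands for the AVStream created for stream #0:0 and is an assumed name; the other names mirror the snippet above.

// Hedged sketch: encoder-assigned timestamps, rescaled to the muxer's time base.
videoFrame->pts = frameCounter;   // pts in the codec time_base (1/fps)

if (avcodec_send_frame(videoContext, videoFrame) < 0)
{
    std::cout << "Failed to send video frame " << frameCounter << std::endl;
    return;
}

AVPacket videoPkt;
av_init_packet(&videoPkt);
videoPkt.data = nullptr;
videoPkt.size = 0;

// Drain every packet the encoder has ready (there may be none, or several).
while (avcodec_receive_packet(videoContext, &videoPkt) == 0)
{
    // Convert pts/dts from the codec time_base to the stream time_base
    // (90k tbn for the video stream), rather than assigning dts manually.
    av_packet_rescale_ts(&videoPkt, videoContext->time_base, videoStream->time_base);
    videoPkt.stream_index = videoStream->index;
    av_interleaved_write_frame(outputFormatContext, &videoPkt);
    av_packet_unref(&videoPkt);
}

    The audio path would do the same with its own codec context, stream index and frame counter, again without setting dts by hand.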