Advanced search

Media (0)

Keyword: - Tags - /xmlrpc

No media matching your criteria is available on this site.

Other articles (42)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)
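
    Purely to illustrate the structure described in this excerpt, here is a minimal JavaScript sketch of what one queued row might look like; the field names come from the text above, while the values and the "article" example are invented:

    // Hypothetical row of spip_spipmotion_attentes, based only on the fields
    // listed above; the values are made up for illustration.
    const attente = {
      id_spipmotion_attente: 12, // unique numeric id of the encoding task
      id_document: 345,          // numeric id of the original document to encode
      id_objet: 67,              // id of the object the encoded file will be attached to
      objet: "article"           // type of that object (assumed example)
    };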

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
    To contribute, register for the project users’ mailing (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (3568)

  • Performance Evaluation of RTX 3080 10G in ffmpeg Transcoding

    26 June 2023, by JoeLin

    My GPU is an RTX 3080 10G. In ffmpeg, the command is:

    ffmpeg -loglevel level+info \
      -n -hide_banner -hwaccel cuda -hwaccel_device 0 -hwaccel_output_format cuda \
      -i test.mkv \
      -map 0 \
      -c:a copy -preset slow -g 50 -bf 2 -rc:v vbr -cq:v 20 -c:v h264_nvenc -b:v 3500k -maxrate:v:0 3500k -bufsize:v:0 7000k -map a:0 -var_stream_map "v:0,a:0,name:1080" \
      "/data/joe/speed/1080_test.mp4" \
      -benchmark

    My video file is 3.5 GB and it takes 27 minutes to execute this command. Can you tell me whether this is within a reasonable range? Checking the logs, I found that the speed is 7.0x. I would like to know how efficient the transcoding capability of the RTX 3080 is, and whether there are any GPUs in a similar price range that offer better transcoding performance. Alternatively, could there be an issue with my command parameters? Thank you for your help, guys!

    I haven't found similar documentation, so I'm unsure whether the problem is with my command.
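
    As a rough sanity check of the figures quoted in this question: ffmpeg's reported speed is the ratio of processed media time to wall-clock time, so a 27-minute run at a steady 7.0x corresponds to roughly 189 minutes of source video. A minimal JavaScript sketch of that arithmetic, assuming the 7.0x figure held for the whole run:

    // Back-of-the-envelope check of the numbers quoted above; both inputs
    // come from the question and are assumed to be steady-state averages.
    const wallClockMinutes = 27;  // reported runtime of the transcode
    const reportedSpeed = 7.0;    // "speed=7.0x" from the ffmpeg log
    // speed = media duration / wall-clock time
    const estimatedSourceMinutes = wallClockMinutes * reportedSpeed;
    console.log(`~${estimatedSourceMinutes} minutes of source handled in ${wallClockMinutes} minutes`);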

  • checkasm/arm: preserve the stack alignment checkasm_checked_call

    12 July 2016, by Janne Grunau
    checkasm/arm: preserve the stack alignment checkasm_checked_call
    

    The stack used by checkasm_checked_call_vfp was a multiple of 4 when the
    checked function is called. AAPCS requires a double-word (8 byte) aligned
    stack for public interfaces. Since both calls are public interfaces, the
    stack is misaligned when the checked function is called.

    Might fix the SIGBUS error in the armv7-linux-clang-3.7 fate config.

    • [DBH] tests/checkasm/arm/checkasm.S
  • Extracting audio from video using fluent-ffmpeg

    12 March, by Idi Favour

    I'm trying to extract the audio from a video, but an error occurs:
Error converting file: Error: ffmpeg exited with code 234: Error opening output file ./src/videos/output-audio.mp3.
Error opening output files: Invalid argument

    I use this same directory for the video compression step that runs before this one, and it works there.

// extract the audio track from the source video straight to MP3
ffmpeg()
  .input(url)
  .audioChannels(0) // maps to "-ac 0", i.e. a request for zero audio channels
  .format("mp3")
  .output("./src/videos/output-audio.mp3")
  .on("error", (err) => console.error(`Error converting file: ${err}`))
  .on("end", async () => {
    console.log("audio transcripts");

    // once the MP3 is written, send it to Whisper for a word-level transcription
    const stream = fs.createReadStream("./src/videos/output-audio.mp3");
    const transcription = await openai.audio.transcriptions.create({
      file: stream,
      model: "whisper-1",
      response_format: "verbose_json",
      timestamp_granularities: ["word"],
    });
    transcripts = transcription.text;
    console.log(transcription.text);
  })
  .run();
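
    A minimal sketch of an audio-only extraction with fluent-ffmpeg, under assumptions the question does not confirm: that audioChannels(0) (which maps to "-ac 0", a request for zero audio channels) is what makes ffmpeg reject the output, and that ./src/videos may not exist relative to the process working directory at that point. The libmp3lame choice and the placeholder input are illustrative:

// Sketch only: extract the audio track to MP3 with fluent-ffmpeg.
// Differences from the snippet above (noVideo, explicit codec, no forced
// channel count, directory creation) are assumptions, not confirmed fixes.
const ffmpeg = require("fluent-ffmpeg");
const fs = require("fs");
const path = require("path");

const url = "input.mp4"; // placeholder for the `url` variable used in the question
const outDir = path.resolve("./src/videos");
const outFile = path.join(outDir, "output-audio.mp3");
fs.mkdirSync(outDir, { recursive: true }); // make sure the output directory exists

ffmpeg()
  .input(url)
  .noVideo()                // drop the video stream, keep only audio
  .audioCodec("libmp3lame") // explicit MP3 encoder
  .format("mp3")
  .output(outFile)
  .on("error", (err) => console.error(`Error converting file: ${err}`))
  .on("end", () => console.log("audio extracted:", outFile))
  .run();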