
Other articles (31)

  • Automatic backup of SPIP channels

    1 April 2010, by

    As part of setting up an open platform, it is important for hosts to have fairly regular backups available to guard against any potential problem.
    To carry out this task we rely on two SPIP plugins: Saveauto, which performs regular backups of the database in the form of a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which produces a zip archive of the site's important data (the documents, the elements (...)

  • Automated installation script of MediaSPIP

    25 April 2011, by

    To overcome the difficulties caused mainly by server-side software dependencies, an "all-in-one" installation script written in bash was created to make this step easier on a server running a compatible Linux distribution.
    You must have SSH access to your server and a root account in order to use it; this is needed to install the dependencies. Contact your provider if you do not have this.
    The documentation on how to use this installation script is available here.
    The code of this (...)

On other sites (2947)

  • ERROR: "Cannot Find FFMPEG" on Google Cloud Compute Engine Debian Wheezy 7.8 managed instance even though it's installed

    17 May 2021, by DynamoBooster

    I wrote a Node.js application that uses the fluent-ffmpeg module to watermark videos uploaded to the platform. I pushed the code to my Google Cloud Compute Engine project, and every time I get Error: Cannot Find FFMPEG. I SSH'd into the instance once it was created and ran these commands to install FFMPEG before actually testing the code. I am not sure what is causing the error, because after this I am positive that FFMPEG is installed.

    sudo apt-get update
    sudo apt-get install -y ffmpeg
    export FFMPEG_PATH="/usr/bin/ffmpeg"
    export FFPROBE_PATH="/usr/bin/ffprobe"

    Below is my FFmpeg code:

    // fluent-ffmpeg must be required; logging and upload_thumbnail are defined elsewhere in the app
    const ffmpeg = require('fluent-ffmpeg');

    function generate_thumbnail(name, path) {
      logging.info("Generating Thumbnail");
      ffmpeg(path)
        .setFfmpegPath('/usr/bin/ffmpeg')
        .setFfprobePath('/usr/bin/ffprobe')
        .on('end', function() {
          upload_thumbnail(name);
          logging.info("Thumbnail Generated and uploaded");
        })
        .on('error', function(err, stdout, stderr) {
          logging.info('ERROR: ' + err.message);
          logging.info('STDERR:' + stderr);
        })
        .on('start', function(commandLine) {
          logging.info(commandLine);
        })
        // take a single screenshot to use as the thumbnail
        .screenshots({
          count: 1,
          filename: name + '_thumbnail.png',
          folder: 'public/images/thumbnails/'
        });
    }
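
    Not part of the question, but a minimal Node.js sanity check for this situation: fluent-ffmpeg falls back on the FFMPEG_PATH and FFPROBE_PATH environment variables, and variables exported in an interactive SSH session are not inherited by a process started by a service manager or deploy pipeline. A sketch that verifies what the running process actually sees (the paths below are simply the ones from the question):

    // check that the binaries exist and that the env vars are visible to *this* process
    const fs = require('fs');

    for (const p of ['/usr/bin/ffmpeg', '/usr/bin/ffprobe']) {
      console.log(p, fs.existsSync(p) ? 'exists' : 'MISSING');
    }
    console.log('FFMPEG_PATH =', process.env.FFMPEG_PATH);
    console.log('FFPROBE_PATH =', process.env.FFPROBE_PATH);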

  • What Google Cloud service can be used to process files stored in Firebase Cloud Storage with FFmpeg? [closed]

    1 May 2021, by uponly

    I am building a ReactJs app and I am trying to figure out a way to process files (images, videos, and audio of any type) that are stored in my Firebase storage bucket using FFmpeg. Currently, I have set up the functionality for allowing the user to upload files to my storage bucket, and a corresponding URL link is stored in a document in Firestore.

    Ideally, I'd love to do this with Cloud Functions HTTP triggers, because I have all of that set up already; it would be nice to just call an HTTP trigger to process the file after it has been uploaded. However, after a bit of research, my current understanding is that I would have to deploy the app to the flexible Google App Engine environment, because apparently that is the only way to set a manual timeout in case I have to process a long, high-quality video, for example. Cloud Functions would then be out, because their short timeout could leave files only partially processed.

    Here is the user flow I am trying to achieve, which may make things clearer:

    1. [Done] The user uploads a file (image, audio, or video) to Firebase Cloud Storage. A URL is also stored in their corresponding user document in Firestore.
    2. [Here and the steps onward are what I am trying to achieve] After the file has been stored, kick off some sort of function that grabs the newly stored file and begins to process it in the cloud.
    3. Store the newly processed file back into the Cloud Storage bucket.
    4. Allow the user to preview the processed file (ideally by streaming it, if possible).

    For step 2 and onward, I am just generally confused about which Google service I should use to process my file in the cloud with FFmpeg, and about how to connect it to my React app on the client side (a sketch of one possible shape for step 2 follows below). If I have to go the Google App Engine route, how do I connect App Engine to my React app without having to build and deploy the app, given that it is still in development?

    This is not a coding question, so I apologize if this is the wrong place to post. I am new to all this; any and all help is greatly appreciated. Thank you.
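
    A minimal sketch of what step 2 could look like, assuming a 1st-generation Node.js Cloud Function for Firebase with the ffmpeg binary supplied by the ffmpeg-static npm package; the processed/ prefix, the output naming, and the ffmpeg arguments are illustrative assumptions, not anything given in the question:

    // Sketch: storage-triggered Cloud Function that runs ffmpeg on each upload
    // and writes the result back under processed/ (hypothetical layout).
    const functions = require('firebase-functions');
    const { Storage } = require('@google-cloud/storage');
    const ffmpegPath = require('ffmpeg-static');
    const { execFile } = require('child_process');
    const path = require('path');
    const os = require('os');

    const storage = new Storage();

    exports.processUpload = functions
      .runWith({ timeoutSeconds: 540, memory: '1GB' }) // 540s is the 1st-gen maximum
      .storage.object()
      .onFinalize(async (object) => {
        // avoid re-triggering on our own output
        if (!object.name || object.name.startsWith('processed/')) return null;

        const src = path.join(os.tmpdir(), path.basename(object.name));
        const dst = src.replace(/\.\w+$/, '') + '_out.mp4'; // hypothetical naming
        await storage.bucket(object.bucket).file(object.name).download({ destination: src });

        // run ffmpeg; replace the arguments with the real processing step
        await new Promise((resolve, reject) =>
          execFile(ffmpegPath, ['-y', '-i', src, dst], (err) => (err ? reject(err) : resolve())));

        await storage.bucket(object.bucket).upload(dst, {
          destination: 'processed/' + path.basename(dst),
        });
        return null;
      });

    For jobs that exceed that timeout (the long, high-quality videos mentioned above), a small service on Cloud Run, which allows longer request timeouts, is a commonly used alternative to the App Engine flexible environment.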

  • Google Cloud Speech-to-Text not giving output for OGG & MP3 files

    27 April 2021, by Vedant Jumle

    I am trying to perform speech-to-text on a bunch of audio files that are over 10 minutes long. I don't want to waste storage in the cloud bucket by uploading WAV files straight up, so I am using ffmpeg to convert the files to either Ogg or MP3, like:

    ffmpeg -y -i audio.wav -ar 12000 -r 16000 audio.mp3
    ffmpeg -y -i audio.wav -ar 12000 -r 16000 audio.ogg

    For testing purposes I ran the speech-to-text service on a dummy WAV file, and it seemed to work; I got the text as expected. But for some reason it isn't detecting any speech when I use the Ogg or MP3 file. I could not get AMR files to work either.

    My code:

    # assumes the google-cloud-speech client library is installed
    from google.cloud import speech

    def transcribe_gcs(gcs_uri):
        client = speech.SpeechClient()

        audio = speech.RecognitionAudio(uri=gcs_uri)
        config = speech.RecognitionConfig(
            encoding="OGG_OPUS",  # replace with "LINEAR16" for wav, "OGG_OPUS" for ogg, "AMR" for amr
            sample_rate_hertz=16000,
            language_code="en-US",
        )
        print("starting operation")
        operation = client.long_running_recognize(config=config, audio=audio)
        response = operation.result()
        print(response)

    I have set up the authentication properly, so that is not a problem.

    When I run the speech-to-text service on the same audio in Ogg or MP3 format (for MP3 I just comment out the encoding setting in the config), there is no response; it just prints a line break and is done.

    What can I do to fix this?