Other articles (67)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including its exact version; as precise an explanation of the problem as possible; if possible, the steps taken that lead to the problem; and a link to the site / page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

On other sites (10398)

  • Use ffmpeg to upload directly to s3 ?

    12 September 2012, by user1204384

    I have a PHP script that I am using to convert files to MP3. First the user uploads the file to an EC2 server. From there, can I use ffmpeg to process the file and upload it directly to S3?

    How do I upload a file, process/convert with ffmpeg and upload to an s3 bucket ?
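    One common way to skip a second local copy is to have ffmpeg write the transcode to its stdout and stream that straight into S3. A minimal sketch of that idea, assuming the AWS CLI is installed and the input file and bucket URI are placeholders, not details from the question:

    ```python
    import subprocess

    def build_pipeline(input_path: str, s3_uri: str):
        """Build the two commands: ffmpeg writes MP3 to stdout, the AWS CLI reads stdin."""
        ffmpeg_cmd = ["ffmpeg", "-i", input_path, "-f", "mp3", "pipe:1"]
        # 'aws s3 cp -' takes the object body from standard input
        aws_cmd = ["aws", "s3", "cp", "-", s3_uri]
        return ffmpeg_cmd, aws_cmd

    def transcode_to_s3(input_path: str, s3_uri: str) -> None:
        ffmpeg_cmd, aws_cmd = build_pipeline(input_path, s3_uri)
        # ffmpeg's stdout feeds the AWS CLI directly, so no converted file hits the disk
        ffmpeg = subprocess.Popen(ffmpeg_cmd, stdout=subprocess.PIPE)
        subprocess.run(aws_cmd, stdin=ffmpeg.stdout, check=True)
        ffmpeg.stdout.close()
        if ffmpeg.wait() != 0:
            raise RuntimeError("ffmpeg exited with an error")

    if __name__ == "__main__":
        # hypothetical names: adjust the input file and bucket to your setup
        transcode_to_s3("upload.wav", "s3://my-bucket/converted/out.mp3")
    ```

    The original upload from the user to EC2 still lands on disk; only the converted output is streamed.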

  • AWS Lambda for generate thumbnail

    30 October 2019, by Milousel

    I need to create an AWS Lambda function that generates a thumbnail from a video stored on S3 using ffmpeg, and saves the thumbnail back to S3.

    I downloaded a static ffmpeg build from the https://johnvansickle.com/ffmpeg/ page, placed it alongside a Node.js file, and packaged it into a .zip. From this zip file I created an ffmpeg layer, then attached my Lambda to it. When I test it I receive a Success response, but the image never appears on S3.

    const AWS = require("aws-sdk");
    const { spawnSync, spawn } = require("child_process");
    const { createReadStream, createWriteStream } = require("fs");
    const os = require("os");
    const path = require("path");
    const s3 = new AWS.S3();
    const ffmpegPath = "/opt/nodejs/ffmpeg";
    exports.handler = async event => {
     console.log(
       `VideoEditor lambda is ready to start.\nEvent: ${JSON.stringify(
         event,
         null,
         2
       )}`
     );
     const ENV = process.env.TARGET_ENV;
     console.log("ENV: " + ENV);
     const VIDEO_BUCKET =
       process.env.VIDEO_BUCKET || "video." + ENV + ".abc.com";
     console.log("VIDEO_BUCKET: " + VIDEO_BUCKET);
     const target = s3.getSignedUrl("getObject", {
       Bucket: VIDEO_BUCKET,
       Key: "test/0255f240-efef-11e9-862e-3949600f0ec9.mp4",
       Expires: 1000
     });
     const abc = "test/0255f240-efef-11e9-862e-3949600f0ec9.mp4";
     console.log("target: " + target);
     const fileName = abc.split(".")[0];
     const resolution = resolutionCalculator("banner");
     const width = resolution.split("x")[0];
     const height = resolution.split("x")[1];
     console.log(`vyska: ${height}, sirka ${width}`);
     const workdir = os.tmpdir();
     const screen = path.join(workdir, fileName + ".jpg");
     s3.getObject({
       Bucket: VIDEO_BUCKET,
       Key: "test/0255f240-efef-11e9-862e-3949600f0ec9.mp4"
     })
       .promise()
       .then((data) => {
       console.log("converting: " + screen);
       return spawn(
           "/opt/nodejs/ffmpeg",
           [
             "-i",
             data,
             "-ss",
             2,
             "-vframes",
             1,
             screen
           ],
           {
             env: process.env,
             cwd: workdir
           }
         )
           .promise()
           .then((data) => {
             const params = {
               Body: data,
               Bucket: VIDEO_BUCKET,
               Key: "test/shots/" + fileName + ".jpg"
             };
             console.log("params: " + params);
             s3.putObject(params, (err, data) => {
               if (err) {
                 console.log("s3.putObject error msg: " + err);
               }
             }).promise();
           });
       });
     console.log("createImage finish");
     return console.log(`processed ${VIDEO_BUCKET} successfully`);
     //Calculate resolution
    };
    function resolutionCalculator(type) {
     if (type == "banner") {
       return process.env.BANNER_RESOLUTION;
     } else {
       return process.env.FEED_RESOLUTION;
     }
    }
    exports.handler();

    When I test it I receive these logs :

  • INFO vyska : 200, sirka 300
  • INFO createImage finish
  • INFO processed video.devel.acme.lutherinfra.com successfully

    but I expect logs from inside the s3.getObject operation too.
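    Two things stand out in the code above: the resolved s3.getObject response object is passed to ffmpeg as its -i argument (ffmpeg needs a file path or URL, not a JavaScript object), and the async handler returns before the spawn/putObject promise chain settles, so Lambda suspends the sandbox with the work unfinished. A hedged sketch of the usual download → process → upload shape, written in Python for brevity rather than Node (the env var, event field, and output key are placeholders):

    ```python
    import os
    import subprocess

    def thumbnail_command(video_path: str, image_path: str, seek_seconds: int = 2):
        # -ss before -i seeks quickly; -vframes 1 grabs a single frame
        return ["ffmpeg", "-ss", str(seek_seconds), "-i", video_path,
                "-vframes", "1", "-y", image_path]

    def handler(event, context):
        import boto3  # provided by the Lambda Python runtime
        s3 = boto3.client("s3")
        bucket = os.environ["VIDEO_BUCKET"]  # hypothetical env var
        key = event["key"]                   # video key passed in the event
        video = "/tmp/input.mp4"
        image = "/tmp/thumbnail.jpg"
        # ffmpeg needs a real local file, not an S3 response object
        s3.download_file(bucket, key, video)
        subprocess.run(thumbnail_command(video, image), check=True)
        # synchronous upload, so it finishes before the handler returns
        s3.upload_file(image, bucket, "test/shots/thumbnail.jpg")
        return {"thumbnail_key": "test/shots/thumbnail.jpg"}
    ```

    In Node the equivalent fix is to await the whole getObject/spawn/putObject chain inside the handler before returning.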

  • How to save the last 30 seconds of video in Python

    5 June 2024, by Mateus Coelho

    I want the last 30 seconds to be recorded every time I press Enter and sent to the cloud. For example, if I press it at 00:10:30, I want a video recorded from 00:10:00 to 00:10:30, and if I then press it at 00:10:32, I need another, different video whose content runs from 00:10:02 to 00:10:32.

    


    I think I have a problem where I will always end up recovering the same buffer for the last 30 seconds. Is there any approach so that whenever I press Enter I retrieve a unique video? Is my current approach the best for the problem, or should I use something else?

    


import subprocess
import os
import keyboard
from datetime import datetime
from google.cloud import storage

# Configuration
STATE = "mg"
CITY = "belohorizonte"
COURT = "duna"
RTSP_URL = "rtsp://Apertai:130355va@192.168.0.2/stream1"
BUCKET_NAME = "apertai-cloud"
CREDENTIALS_PATH = "C:/Users/Abidu/ApertAI/key.json"

def start_buffer_stream():
    # Command for continuous buffer that overwrites itself every 30 seconds
    buffer_command = [
        'ffmpeg',
        '-i', RTSP_URL,
        '-map', '0',
        '-c', 'copy',
        '-f', 'segment',
        '-segment_time', '30',  # Duration of each segment
        '-segment_wrap', '2',  # Number of segments to wrap around
        '-reset_timestamps', '1',  # Reset timestamps at the start of each segment
        'buffer-%03d.ts'  # Save segments with a numbering pattern
    ]
    return subprocess.Popen(buffer_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

def save_last_30_seconds_from_buffer(buffer_file):
    datetime_now = datetime.now()
    datetime_now_formatted = f"{datetime_now.day:02}{datetime_now.month:02}{datetime_now.year}-{datetime_now.hour:02}{datetime_now.minute:02}"
    output_file_name = os.path.abspath(f"{STATE}-{CITY}-{COURT}-{datetime_now_formatted}.mp4")

    # Copy the most recent buffer segment to the output file
    save_command = [
        'ffmpeg',
        '-i', buffer_file,
        '-c', 'copy',
        output_file_name
    ]
    subprocess.run(save_command, check=True)
    print(f"Saved last 30 seconds: {output_file_name}")
    return output_file_name

def upload_to_google_cloud(file_name):
    client = storage.Client.from_service_account_json(CREDENTIALS_PATH)
    bucket = client.bucket(BUCKET_NAME)
    blob = bucket.blob(os.path.basename(file_name).replace("-", "/"))
    blob.upload_from_filename(file_name, content_type='application/octet-stream')
    print(f"Uploaded {file_name} to {BUCKET_NAME}")
    os.remove(file_name)  # Clean up the local file

def main():
    print("Starting continuous buffer for RTSP stream...")
    start_time = datetime.now()
    buffer_process = start_buffer_stream()
    print("Press 'Enter' to save the last 30 seconds of video...")

    while True:
        # Verify if 30 seconds has passed since start
        if keyboard.is_pressed('enter'):
            print("Saving last 30 seconds of video...")
            elapsed_time = (datetime.now() - start_time).total_seconds()
            # Determine which buffer segment to save
            if elapsed_time % 60 < 30:
                buffer_file = 'buffer-000.ts'
            else:
                buffer_file = 'buffer-001.ts'
            final_video = save_last_30_seconds_from_buffer(buffer_file)
            upload_to_google_cloud(final_video)

if __name__ == "__main__":
    main()
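    With -segment_wrap 2, each key press lands partway through one segment, so copying a single buffer file cannot yield the exact last 30 seconds. One possible workaround, sketched here under the question's own file names, is to join the older and newer segments with ffmpeg's concat demuxer and then trim with -sseof -30, which starts reading 30 seconds before end of file, so every press produces a unique clip:

    ```python
    import subprocess

    def last_30s_commands(older: str, newer: str, output_file: str):
        """Build the concat list and the two ffmpeg commands for a unique clip."""
        # concat demuxer input: the two wrapped segments, oldest first
        list_body = f"file '{older}'\nfile '{newer}'\n"
        concat_cmd = ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                      "-i", "join.txt", "-c", "copy", "joined.ts"]
        # -sseof -30 seeks 30 s before the end of the joined file
        trim_cmd = ["ffmpeg", "-y", "-sseof", "-30", "-i", "joined.ts",
                    "-c", "copy", output_file]
        return list_body, concat_cmd, trim_cmd

    def save_unique_last_30s(older: str, newer: str, output_file: str) -> None:
        list_body, concat_cmd, trim_cmd = last_30s_commands(older, newer, output_file)
        with open("join.txt", "w") as f:
            f.write(list_body)
        subprocess.run(concat_cmd, check=True)
        subprocess.run(trim_cmd, check=True)
    ```

    Note that -c copy can only cut at keyframes, so the clip boundary may be off by up to one GOP; re-encoding the trim step would make it frame-accurate at the cost of CPU.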




    

