
Media (1)

Keyword: - Tags -/biographie

Other articles (41)

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators but remain independently configurable, so the minimum status required to use each one can be changed, notably: writing content on the site, adjustable via the form-template management; adding notes to articles; adding captions and annotations to images;

  • Uploading media and themes via FTP

    31 May 2013

    MediaSPIP also handles media transferred by FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
    From the start, your FTP space contains the following folders: config/: the site's configuration folder; IMG/: media already processed and online on the site; local/: the site's cache directory; themes/: custom themes and style sheets; tmp/: working folder (...)
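
    For illustration only, a minimal sketch of such an upload from Python with the standard ftplib module, assuming a hypothetical host, account, and stylesheet name (none of these values come from the article); it drops a custom stylesheet into the themes/ folder listed above:

import ftplib

# Hypothetical credentials: replace with the ones supplied for your MediaSPIP site
HOST = "ftp.example.org"
USER = "mediaspip-user"
PASSWORD = "secret"

with ftplib.FTP(HOST, USER, PASSWORD) as ftp:
    ftp.cwd("themes")  # custom themes / style sheets, as described above
    with open("my-theme.css", "rb") as stylesheet:
        ftp.storbinary("STOR my-theme.css", stylesheet)
    print(ftp.nlst())  # list the folder to confirm the upload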

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (3997)

  • How to convert mp3 data to wav data?

    12 May 2023, by Yali

    I have a WAV audio file. I extracted sample data from it with the Python pydub module and got this:

    


    [-139 18 -215 34 -196 6 -295 -31 -301 -35 -211 13 -93 47
-60 39 -58 7 -17 2]

    


    (these are the first 10 values; the file contains more than a million samples)

    


from pydub import AudioSegment
import numpy as np

# Load the WAV file and read its raw samples into a NumPy array
song = AudioSegment.from_file("test.wav")
extract_data = np.array(song.get_array_of_samples())
print(extract_data[:10])


    


    Then I converted the WAV to MP3 with the same module, extracted the data from the MP3 file, and got this:

    


    [-108 7 -193 24 -223 11 -239 -31 -248 -43 -203 -10 -101 23
-14 24 10 15 24 16]

    


    (again, the first 10 values out of more than a million)

    


# Convert the WAV to MP3, then read the samples back from the MP3
song = AudioSegment.from_file("test.wav")
song.export("test.mp3", format="mp3")
mp3_song = AudioSegment.from_file("test.mp3")
extract_data = np.array(mp3_song.get_array_of_samples())
print(extract_data[:10])


    


    Then I converted the MP3 back to WAV, but the extracted samples still match the MP3 data instead of the original WAV data.

    


# Convert the MP3 back to WAV and extract the samples again
mp3_song = AudioSegment.from_file("test.mp3")
mp3_song.export("test1.wav", format="wav")

song = AudioSegment.from_file("test1.wav")
extract_data = np.array(song.get_array_of_samples())
print(extract_data[:10])


    


    My question is: how can I convert the MP3 data back to the original WAV data?
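
    For context, MP3 is a lossy codec, so the decoded samples are not expected to match the original WAV exactly. Below is a minimal sketch, assuming the test.wav and test1.wav files produced above, that measures how far the round-tripped samples drift from the originals:

import numpy as np
from pydub import AudioSegment

# Load the original WAV and the WAV decoded from the MP3
original = np.array(AudioSegment.from_file("test.wav").get_array_of_samples())
roundtrip = np.array(AudioSegment.from_file("test1.wav").get_array_of_samples())

# MP3 encoding may add padding, so compare only the overlapping part
n = min(len(original), len(roundtrip))
diff = original[:n].astype(np.int64) - roundtrip[:n].astype(np.int64)
print("max absolute difference: ", np.abs(diff).max())
print("mean absolute difference:", np.abs(diff).mean())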

    


    Please help me.

    


    Thanks.

    


  • How to save the last 30 seconds of video in Python

    5 June 2024, by Mateus Coelho

    I want the last 30 seconds of video to be saved and sent to the cloud every time I press Enter. For example, if I press it at 00:10:30, I want a video covering 00:10:00 to 00:10:30; if I then press it again at 00:10:32, I need a different video covering 00:10:02 to 00:10:32.

    


    I think the problem is that I always end up recovering the same buffer for the last 30 seconds. Is there an approach where every press of Enter retrieves a unique video? Is my current approach the right one for this problem, or should I use something else? (A possible alternative is sketched after the code below.)

    


import subprocess
import os
import keyboard
from datetime import datetime
from google.cloud import storage

# Configuration
STATE = "mg"
CITY = "belohorizonte"
COURT = "duna"
RTSP_URL = "rtsp://Apertai:130355va@192.168.0.2/stream1"
BUCKET_NAME = "apertai-cloud"
CREDENTIALS_PATH = "C:/Users/Abidu/ApertAI/key.json"

def start_buffer_stream():
    # Command for continuous buffer that overwrites itself every 30 seconds
    buffer_command = [
        'ffmpeg',
        '-i', RTSP_URL,
        '-map', '0',
        '-c', 'copy',
        '-f', 'segment',
        '-segment_time', '30',  # Duration of each segment
        '-segment_wrap', '2',  # Number of segments to wrap around
        '-reset_timestamps', '1',  # Reset timestamps at the start of each segment
        'buffer-%03d.ts'  # Save segments with a numbering pattern
    ]
    return subprocess.Popen(buffer_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

def save_last_30_seconds_from_buffer(buffer_file):
    datetime_now = datetime.now()
    datetime_now_formatted = f"{datetime_now.day:02}{datetime_now.month:02}{datetime_now.year}-{datetime_now.hour:02}{datetime_now.minute:02}"
    output_file_name = os.path.abspath(f"{STATE}-{CITY}-{COURT}-{datetime_now_formatted}.mp4")

    # Copy the most recent buffer segment to the output file
    save_command = [
        'ffmpeg',
        '-i', buffer_file,
        '-c', 'copy',
        output_file_name
    ]
    subprocess.run(save_command, check=True)
    print(f"Saved last 30 seconds: {output_file_name}")
    return output_file_name

def upload_to_google_cloud(file_name):
    client = storage.Client.from_service_account_json(CREDENTIALS_PATH)
    bucket = client.bucket(BUCKET_NAME)
    blob = bucket.blob(os.path.basename(file_name).replace("-", "/"))
    blob.upload_from_filename(file_name, content_type='application/octet-stream')
    print(f"Uploaded {file_name} to {BUCKET_NAME}")
    os.remove(file_name)  # Clean up the local file

def main():
    print("Starting continuous buffer for RTSP stream...")
    start_time = datetime.now()
    buffer_process = start_buffer_stream()
    print("Press 'Enter' to save the last 30 seconds of video...")

    while True:
        # Save the last 30 seconds of buffered video when Enter is pressed
        if keyboard.is_pressed('enter'):
            print("Saving last 30 seconds of video...")
            elapsed_time = (datetime.now() - start_time).total_seconds()
            # Determine which buffer segment to save
            if elapsed_time % 60 < 30:
                buffer_file = 'buffer-000.ts'
            else:
                buffer_file = 'buffer-001.ts'
            final_video = save_last_30_seconds_from_buffer(buffer_file)
            upload_to_google_cloud(final_video)

if __name__ == "__main__":
    main()
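
    One way to make every press unique, sketched below with assumptions that are not in the original post (5-second segments, six of them stitched per clip, and ffmpeg's concat demuxer): keep many short segments in the rolling buffer and concatenate the most recent ones at the moment Enter is pressed.

import glob
import os
import subprocess

SEGMENT_SECONDS = 5      # assumed segment length: shorter segments give finer granularity
SEGMENTS_PER_CLIP = 6    # 6 x 5 s = 30 s per saved clip

def start_short_segment_buffer(rtsp_url):
    # Same idea as start_buffer_stream above, but with a larger rolling window of short segments
    return subprocess.Popen([
        'ffmpeg', '-i', rtsp_url, '-map', '0', '-c', 'copy',
        '-f', 'segment',
        '-segment_time', str(SEGMENT_SECONDS),
        '-segment_wrap', str(SEGMENTS_PER_CLIP + 2),  # keep slack for the segment being written
        '-reset_timestamps', '1',
        'buffer-%03d.ts',
    ])

def save_last_30_seconds(output_file):
    # Pick the most recently written segments and concatenate them in time order
    segments = sorted(glob.glob('buffer-*.ts'), key=os.path.getmtime)[-SEGMENTS_PER_CLIP:]
    with open('segments.txt', 'w') as listing:
        for segment in segments:
            listing.write(f"file '{segment}'\n")
    subprocess.run([
        'ffmpeg', '-y', '-f', 'concat', '-safe', '0',
        '-i', 'segments.txt', '-c', 'copy', output_file,
    ], check=True)

    Segment length, wrap count, and how to treat the segment still being written are all tuning points; this is only a sketch of the idea, not a drop-in replacement for the code above.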




    


  • Revision d205335060: [svc] Finalize spatial svc first pass rate control

    19 mars 2014, par Minghai Shang

    Changed Paths:
     Modify /examples/vp9_spatial_scalable_encoder.c
     Modify /test/svc_test.cc
     Modify /vp9/encoder/vp9_firstpass.c
     Modify /vp9/encoder/vp9_firstpass.h
     Modify /vp9/encoder/vp9_onyx_if.c
     Modify /vp9/encoder/vp9_onyx_int.h
     Modify /vp9/encoder/vp9_svc_layercontext.h
     Modify /vpx/src/svc_encodeframe.c
     Modify /vpx/vpx_encoder.h

    [svc] Finalize spatial svc first pass rate control

    1. Save stats for each spatial layer
    2. Add frame buffer management for svc first pass rc
    3. Set default spatial layer to 1
    4. Flush encoder at the end of stream in test app
    This only supports spatial svc.
    Change-Id: Ia89cfa87bb6394e6c0405b921d86c426d0a0c9ae