
Other articles (86)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects / individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • The user profile

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    Users can also edit their profile from their author page: a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

On other sites (11017)

  • How to pass a ByteArray as input and output in the command array for FFmpeg execute?

    11 April 2023, by Rohit gupta

    I am working with the FFmpeg library, developing microphone-based reading and writing of audio data, and I need to process some of that audio data. However, I don't want to store it in files or write it out through an output stream first, as that adds extra delay.

    


    This is the code I used:

    


    implementation 'com.arthenica:mobile-ffmpeg-full:4.2.2.LTS'


    


    private fun processAudioAsChipmunk(audioData: ShortArray, record: AudioRecord): ShortArray {
        val sampleRate = record.sampleRate
        val channels = record.channelCount
        val filter = "asetrate=2*${sampleRate},aresample=48000,atempo=2" // defined but not used below
        val outputData = audioData
        val cmd = arrayOf(
            "-y",
            "-i",
            fileName1,  // input file path is passed here; I want it to be a ByteArray (audioData)
            "-af",
            "asetrate=22100,atempo=1/2", // "chipmunk" pitch/tempo chain
            fileName2   // output file path is passed here; I want it to be a ByteArray (outputData)
        )

        FFmpeg.execute(cmd)
        return outputData
    }



    


    Now I want to read and write the audioData without storing it in files and passing filenames.

    


    The code below shows how I read and write the audio data:

    


    read = record!!.read(audioData, 0, minBuffer)                        // AudioRecord
    val processedAudioData = processAudioAsChipmunk(audioData, record!!) // Here!
    write = player!!.write(processedAudioData, 0, read)                  // AudioTrack
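
    For context, here is one way those three lines might sit inside a complete capture/playback loop. This wiring is not from the original post: the sample rate, channel layout and the isRecording callback are assumptions, processAudioAsChipmunk() is the function shown above, and the RECORD_AUDIO permission is required.

    import android.media.AudioFormat
    import android.media.AudioManager
    import android.media.AudioRecord
    import android.media.AudioTrack
    import android.media.MediaRecorder

    // Hypothetical wiring: capture 16-bit mono PCM from the microphone, run each
    // buffer through processAudioAsChipmunk(), and play the result straight back.
    fun runChipmunkLoop(isRecording: () -> Boolean) {
        val sampleRate = 44100
        val minBuffer = AudioRecord.getMinBufferSize(
            sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT
        )
        val record = AudioRecord(
            MediaRecorder.AudioSource.MIC, sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuffer
        )
        val player = AudioTrack(
            AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            minBuffer, AudioTrack.MODE_STREAM
        )
        val audioData = ShortArray(minBuffer)

        record.startRecording()
        player.play()
        while (isRecording()) {
            val read = record.read(audioData, 0, minBuffer)        // AudioRecord -> ShortArray
            if (read <= 0) continue
            val processed = processAudioAsChipmunk(audioData, record)
            player.write(processed, 0, read)                       // ShortArray -> AudioTrack
        }
        record.stop(); record.release()
        player.stop(); player.release()
    }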


    


    Afterthought:

    


    I tried the code below with ffmpeg-kit, but again I could not work out how to pass an input and output stream (instead of filenames) to executeAsync:

    


    https://github.com/arthenica/ffmpeg-kit


    


    fun executeFfmpegCommand(audioData: ShortArray, rubberbandPitch: Float = 1.0f): ShortArray {
        // Convert short array to byte array
        val byteBuffer = ByteBuffer.allocate(audioData.size * 2)
        byteBuffer.order(ByteOrder.nativeOrder())
        for (i in audioData.indices) {
            byteBuffer.putShort(audioData[i])
        }
        val inputData = byteBuffer.array()

        // Set input data as input stream
        val inputStream = ByteArrayInputStream(inputData)

        // Create output stream to store output data
        val outputStream = ByteArrayOutputStream()

        // Add rubberband filter to command to make audio sound like chipmunk
        val rubberbandFilter = "rubberband=pitch=${rubberbandPitch}"
        val commandWithFilter = arrayOf("-f", "s16le", "-ar", "44100", "-ac", "2", "-i", "-", "-af", rubberbandFilter, "-f", "s16le", "-")

        // Create ffmpeg session
        val session = FFmpegKit.executeAsync() // @!! HOW TO PASS CommandFilter, inputStream, outPutStream here !!@

        // Wait for session to complete
        // (note: Object.wait() does not actually wait for the FFmpeg session to finish)
        session.wait()

        // Bail out if the session did NOT succeed
        if (!ReturnCode.isSuccess(session.returnCode)) {
            Log.e("FFmpeg", "FFmpeg execution failed with return code: ${session.returnCode}")
            return ShortArray(0)
        }

        // Convert output stream to short array
        val outputData = outputStream.toByteArray()
        val shortBuffer = ByteBuffer.wrap(outputData)
            .order(ByteOrder.nativeOrder())
            .asShortBuffer()
        val outputShortArray = ShortArray(shortBuffer.remaining())
        shortBuffer.get(outputShortArray)

        // Return output short array
        return outputShortArray
    }
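
    For what it's worth, if the ffmpeg-kit named-pipe helpers are available (FFmpegKitConfig.registerNewFFmpegPipe() / closeFFmpegPipe()), one possible shape for a no-temp-file version looks like the sketch below. This is an assumption-laden illustration rather than a confirmed answer: it assumes those helpers plus a synchronous FFmpegKit.executeWithArguments() call, and processPcmThroughFfmpeg() is a made-up wrapper name.

    import android.content.Context
    import com.arthenica.ffmpegkit.FFmpegKit
    import com.arthenica.ffmpegkit.FFmpegKitConfig
    import com.arthenica.ffmpegkit.ReturnCode
    import java.io.ByteArrayOutputStream
    import java.io.FileInputStream
    import java.io.FileOutputStream
    import java.nio.ByteBuffer
    import java.nio.ByteOrder
    import kotlin.concurrent.thread

    // Sketch only (assumed ffmpeg-kit API: registerNewFFmpegPipe, closeFFmpegPipe,
    // executeWithArguments): push raw 16-bit PCM through an FFmpeg filter and get
    // the filtered samples back, without writing any audio to ordinary files.
    fun processPcmThroughFfmpeg(
        context: Context,
        audioData: ShortArray,
        sampleRate: Int,
        channels: Int,
        filter: String = "asetrate=22100,atempo=1/2"   // the "chipmunk" chain used above
    ): ShortArray {
        // ShortArray -> little-endian bytes (raw s16le PCM)
        val inputBytes = ByteBuffer.allocate(audioData.size * 2)
            .order(ByteOrder.LITTLE_ENDIAN)
            .apply { audioData.forEach { putShort(it) } }
            .array()

        // ffmpeg-kit creates named pipes under the app's cache directory
        val inputPipe = FFmpegKitConfig.registerNewFFmpegPipe(context)
        val outputPipe = FFmpegKitConfig.registerNewFFmpegPipe(context)

        // Named pipes block until both ends are open, so feed and drain them on worker threads
        val collected = ByteArrayOutputStream()
        val writer = thread { FileOutputStream(inputPipe).use { it.write(inputBytes) } }
        val reader = thread { FileInputStream(outputPipe).use { it.copyTo(collected) } }

        // Raw PCM has no header, so format, rate and channel count must be stated on both ends
        val session = FFmpegKit.executeWithArguments(arrayOf(
            "-y",
            "-f", "s16le", "-ar", "$sampleRate", "-ac", "$channels", "-i", inputPipe,
            "-af", filter,
            "-f", "s16le", "-ar", "$sampleRate", "-ac", "$channels", outputPipe
        ))
        writer.join()
        reader.join()
        FFmpegKitConfig.closeFFmpegPipe(inputPipe)
        FFmpegKitConfig.closeFFmpegPipe(outputPipe)

        if (!ReturnCode.isSuccess(session.returnCode)) return ShortArray(0)

        // little-endian bytes -> ShortArray
        val shorts = ByteBuffer.wrap(collected.toByteArray())
            .order(ByteOrder.LITTLE_ENDIAN)
            .asShortBuffer()
        return ShortArray(shorts.remaining()).also { shorts.get(it) }
    }

    The point of the pipes is that FFmpeg, which ffmpeg-kit runs inside the app process (so its standard input/output are not easily reachable from Kotlin), still only ever sees two paths; the raw samples never have to be written to ordinary files.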



    


  • How to use ffmpeg to record audio from a video for the first 10 seconds in Python

    13 November 2022, by S Andrew

    I have an RTSP stream coming from a camera which also has audio. My goal is to save the audio. To do this, I have the code below:

    


    import ffmpeg
    ffmpeg.input("rtsp://john:<pwd>@192.168.10.111:5545/Streaming/Channels/291/").output("test.wav", map="0:a:0").run()


    When I terminate the Python script, it saves the test.wav file, which contains just the audio from the RTSP stream. Now I am trying to save the first 10 seconds of the stream into one file, then the next 10 seconds into another file, and so on until the script is terminated.


    To do this, I have thought of putting the ffmpeg stream in a separate thread and scheduling that thread to run every 10 seconds. This way a new stream is created that saves the audio for 10 seconds and then exits, and this keeps repeating. But to achieve this, I need to know how to save just the initial 10 seconds from the stream.

