Advanced search

Media (0)

Word: - Tags -/flash

No media matching your criteria is available on this site.

Other articles (71)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images for a comparison.
    To get this, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators fine-tune the configuration of these menus.
    Menus created when the site is initialised
    By default, three menus are created automatically when the site is initialised: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with Zpip-based templates; (...)

  • Possible deployments

    31 January 2010

    Two types of deployment are possible, depending on two aspects: the intended installation method (standalone or as a farm); and the expected number of daily encodings and level of traffic.
    Video encoding is a heavy process that consumes a great deal of system resources (CPU and RAM), and all of this needs to be taken into account. The system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

On other sites (12384)

  • How do I create and initialise a DXGI_FORMAT_NV12 resource in DX12 (source is AVFrame)

    5 January 2023, by mike

    I'm trying to create an NV12 resource as the source for a video encoder in DX12. While I intend to eventually populate the resource from the GPU, what I'm trying to do now is take an ffmpeg AVFrame I already have (in AV_PIX_FMT_YUV420P format) and create a texture in DXGI_FORMAT_NV12 format from that data.

    I understand the NV12 format (https://learn.microsoft.com/en-us/windows/win32/medfound/recommended-8-bit-yuv-formats-for-video-rendering#nv12) has U and V interleaved, while AV_PIX_FMT_YUV420P doesn't.

    My main question is what the D3D12_RESOURCE_DESC looks like for an NV12 texture: do I tell it I need more than one array/mip level to make it planar? Or do I just give it a single memory address with both planes laid out as per the NV12 format, and it figures out the subresources for me based on the format?

    I understand that to read the data I define two SRVs, one for Y mapped to the red channel and a second for U and V, but it's the initialisation that's confusing me.
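
    For reference, a minimal, untested sketch of how such a resource description is commonly filled in: NV12 remains a single texture with DepthOrArraySize = 1 and MipLevels = 1, and the two planes are exposed as plane-slice subresources (0 = Y, 1 = interleaved UV) rather than extra array or mip levels. The frame dimensions and the device variable here are assumptions for illustration, and interleaving the AVFrame's separate U and V planes into the UV plane is still up to the caller.

    #include <d3d12.h>

    // Example dimensions (assumed even, as NV12 requires).
    UINT64 frameWidth = 1920;
    UINT frameHeight = 1080;

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    desc.Width            = frameWidth;   // full luma width
    desc.Height           = frameHeight;  // full luma height
    desc.DepthOrArraySize = 1;            // one texture, not an array
    desc.MipLevels        = 1;            // one mip; the planes are not mips
    desc.Format           = DXGI_FORMAT_NV12;
    desc.SampleDesc       = {1, 0};
    desc.Layout           = D3D12_TEXTURE_LAYOUT_UNKNOWN;
    desc.Flags            = D3D12_RESOURCE_FLAG_NONE;

    // The planes show up as two copyable subresources. GetCopyableFootprints
    // (called on an existing ID3D12Device* device, assumed created elsewhere)
    // reports the upload-buffer layout of each; the Y and UV data are then
    // uploaded with one CopyTextureRegion per subresource rather than from a
    // single flat memory address.
    D3D12_PLACED_SUBRESOURCE_FOOTPRINT footprints[2];
    UINT numRows[2];
    UINT64 rowSizeInBytes[2], totalBytes;
    device->GetCopyableFootprints(&desc, 0, 2, 0, footprints, numRows,
                                  rowSizeInBytes, &totalBytes);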

  • Using ffmpeg to get a passthrough stream and create an audio resource for discordjs/voice?

    5 November 2022, by Zer0

    I have access to an HLS manifest file which I can use to download a song. I use the following command to download it and pipe it to stdout:

    ffmpeg -i 'https://api.someurl.com/1/2/3/stream.ismd/manifest.m3u8' -vn -sn -f flv -c copy - | ffmpeg -i - -b:a 192k -f mp3 -

    I am successfully able to output a playable mp3 from this command.

    I am looking for a way to get the same kind of result in Node.js: obtain a readable stream of the final output and pipe it to createAudioResource() to make it playable in a Discord voice channel.

    There is a Node.js module called m3u8stream which I tried using with the code below, but I couldn't get it to work, as my knowledge of Node.js streams is limited and the documentation is confusing.

    const stream = new PassThrough()
    let resource
    const stream = m3u8stream('https://api.someurl.com/1/2/3/stream.ismd/manifest.m3u8', {
      parser: 'm3u8',
      requestOptions: { headers: { 'content-type': 'video/mp4' } }
    }).pipe(stream)
    stream.on('data', d => { resource = createAudioResource(d) })
    player.play(resource)

    This code did not work, and I am assuming this has something to do with m3u8stream itself, because piping the stream to fs.createWriteStream('file.mp4') as shown in their example results in a zero-KB MP4 file.

    I am sure there is a better tool or way to get a stream from the manifest and play it on Discord, and I would be happy to hear about it, as I am not bound to ffmpeg for this. Any help to make this work is appreciated. Thanks!
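
    For what it's worth, one common approach sidesteps m3u8stream entirely: since ffmpeg can read the manifest URL directly, the two-process shell pipeline above can usually be collapsed into a single ffmpeg invocation spawned from Node.js, with its stdout handed straight to createAudioResource(), which accepts a readable stream (note the snippet above also declares stream twice and builds one resource per 'data' chunk, which won't run as written). A sketch, assuming ffmpeg is on the PATH and reusing the player from the snippet above:

    const { spawn } = require('child_process')
    const { createAudioResource } = require('@discordjs/voice')

    // Same conversion as the shell pipeline, in one process: drop video and
    // subtitle streams, encode to 192 kbps MP3, write the result to stdout.
    const ffmpeg = spawn('ffmpeg', [
      '-i', 'https://api.someurl.com/1/2/3/stream.ismd/manifest.m3u8',
      '-vn', '-sn',
      '-b:a', '192k',
      '-f', 'mp3',
      'pipe:1'
    ])

    // createAudioResource() takes the readable stream as-is; there is no
    // need to collect chunks or create a resource per 'data' event.
    const resource = createAudioResource(ffmpeg.stdout)
    player.play(resource)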

  • AWS Lambda SAM deploy error - Template format error: Unresolved resource dependencies

    1 June 2022, by mozenge

    I am trying to deploy an AWS Lambda function using the SAM CLI. I have some layers defined in the SAM template. Testing locally using sam local start-api works quite well, but deploying using the sam deploy --guided command throws the following error:
Error: Failed to create changeset for the stack: sam-app, ex: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state: For expression "Status" we matched expected path: "FAILED" Status: FAILED. Reason: Template format error: Unresolved resource dependencies [arn:aws:lambda:us-west-1:338231645678:layer:ffmpeg:1] in the Resources block of the template

    The SAM template is as follows:

    AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  video-processor-functions

  Functions to generate gif and thumbnail from uploaded videos
  
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3
    Tracing: Active

Resources:
  VideoProcessorFunctions:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: src/
      Handler: app.lambdaHandler
      Runtime: nodejs14.x
      # timeout in seconds - 2 minutes
      Timeout: 120
      Layers:
        - !Ref VideoProcessorDepLayer
        - !Ref arn:aws:lambda:us-west-1:338231645678:layer:ffmpeg:1
      Architectures:
        - x86_64
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get

  VideoProcessorDepLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: mh-video-processor-dependencies
      Description: Dependencies for sam app [video-processor-functions]
      ContentUri: dependencies/
      CompatibleRuntimes:
        - nodejs14.17
      LicenseInfo: 'MIT'
      RetentionPolicy: Retain

Outputs:
  # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
  VideoProcessorFunctions:
    Description: "Generate GIF and Thumnail from Video"
    Value: !GetAtt VideoProcessorFunctions.Arn
  VideoProcessorFunctionsIamRole:
    Description: "Implicit IAM Role created for MH Video Processor function"
    Value: !GetAtt VideoProcessorFunctionsRole.Arn

    Any ideas what I'm doing wrong?
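
    One likely cause, for reference: !Ref only resolves logical names declared in the same template (such as VideoProcessorDepLayer), so !Ref arn:aws:lambda:... makes CloudFormation look for a resource literally named by that ARN, which is exactly the "Unresolved resource dependencies" it reports. A minimal sketch of the Layers list with the external, pre-published layer given as a plain ARN string:

      Layers:
        # Layer defined in this template: reference it by logical ID.
        - !Ref VideoProcessorDepLayer
        # Pre-published layer: list the full ARN as a literal string, no !Ref.
        - arn:aws:lambda:us-west-1:338231645678:layer:ffmpeg:1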