
Other articles (100)

  • Authorisations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Contribute to translation

    13 April 2011

    You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into other languages, allowing it to spread to new linguistic communities.
    To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

On other sites (11495)

  • Formula to calculate video dimensions for target aspect ratio

    23 April 2015, by Nevoxx

    I’m currently building an application that converts videos to a specific size while taking the target aspect ratio into account.

    The target resolution must always be 1280x720; I want to fit the uploaded video into this frame using black bars.

    Can anyone point out a formula to calculate the target video dimensions without changing the aspect ratio? If the scaled video is smaller than 1280x720, I’ll add the missing pixels as black bars.

    I’m using ffmpeg with the "-vf scale=1280x720,pad=?:?:black" command.

    (Sorry if my English is not that great.)
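
    For reference, the arithmetic is just "scale by the limiting ratio, then centre with black bars". A minimal sketch in Python (my own illustration, not from the original post; the ffmpeg filter shown in the final comment assumes a build whose scale filter supports force_original_aspect_ratio):

    def fit_with_bars(src_w, src_h, dst_w=1280, dst_h=720):
        # Scale factor that fits the source inside the target without distortion
        scale = min(dst_w / src_w, dst_h / src_h)
        new_w = int(src_w * scale) // 2 * 2   # round down to even, as most codecs require
        new_h = int(src_h * scale) // 2 * 2
        pad_x = (dst_w - new_w) // 2          # left/right black bars
        pad_y = (dst_h - new_h) // 2          # top/bottom black bars
        return new_w, new_h, pad_x, pad_y

    print(fit_with_bars(1920, 800))   # -> (1280, 532, 0, 94): 94-pixel bars top and bottom

    # Roughly equivalent ffmpeg filter chain, letting ffmpeg do the same arithmetic:
    # -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2:black"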

  • Open Banking Security 101: Is open banking safe?

    3 December 2024, by Daniel Crough — Banking and Financial Services

    Open banking is changing the financial industry. Statista reports that open banking transactions hit $57 billion worldwide in 2023 and will likely reach $330 billion by 2027. According to ACI, global real-time payment (RTP) transactions are expected to exceed $575 billion by 2028.

    Open banking is changing how banking works, but is it safe ? And what are the data privacy and security implications for global financial service providers ?

    This post explains the essentials of open banking security and addresses critical data protection and compliance questions. We’ll explore how a privacy-first approach to data analytics can help you meet regulatory requirements, build customer trust and ultimately thrive in the open banking market while offering innovative financial products.

     


    What is open banking?

    Open banking is a system that connects banks, authorised third-party providers and technology, empowering customers to securely share their financial data with other companies. At the same time, it unlocks access to more innovative and personalised financial products and services like spend management solutions, tailored budgeting apps and more convenient payment gateways. 

    With open banking, consumers have greater choice and control over their financial data, ultimately fostering a more competitive financial industry, supporting technological innovation and paving the way for a more customer-centric financial future.

    Imagine offering your clients a service that analyses spending habits across all accounts — no matter the institution — and automatically finds ways to save them money. Envision providing personalised financial advice tailored to individual needs or enabling customers to apply for a mortgage with just a few taps on their phone. That’s the power of open banking.

    Embracing this technology is an opportunity for banks and fintech companies to build new solutions for customers who are eager for a more transparent and personalised digital experience.

    How is open banking different from traditional banking?

    In traditional banking, consumers’ financial data is locked away and siloed within each bank’s systems, accessible only to the bank and the account holder. While account holders could manually aggregate and share this data, the process is cumbersome and prone to errors.

    With open banking, users can choose what data to share and with whom, allowing trusted third-party providers to access their financial information directly from the source. 

    Side-by-side comparison between open banking and traditional banking showing the flow of financial information between the bank and the user with and without a third party.

    How does open banking work?

    The technology that makes open banking possible is the application programming interface (API). Think of banking APIs as digital translators for different software systems; instead of translating languages, they translate data and code.

    The bank creates and publishes APIs that provide secure access to specific types of customer data, like credit card transaction history and account balances. The open banking API acts like a friendly librarian, ready to assist apps in accessing the information they need in a secure and organised way.

    Third-party providers, like fintech companies, use these APIs to build their applications and services. Some tech companies also act as intermediaries between fintechs and banks to simplify connections to multiple APIs simultaneously.
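
    Purely as an illustration of the pattern (the endpoint, path and field names below are hypothetical, not any real bank’s API), a third-party app typically presents an OAuth access token obtained with the customer’s consent and asks only for the data it needs:

    import requests  # assumes the third-party requests library is installed

    API_BASE = "https://api.examplebank.test/open-banking/v3"       # hypothetical endpoint
    ACCESS_TOKEN = "token-granted-after-explicit-customer-consent"  # obtained via OAuth 2.0

    def get_account_balances(account_id: str) -> dict:
        # One narrowly scoped request over TLS, authenticated with the consented token
        response = requests.get(
            f"{API_BASE}/accounts/{account_id}/balances",
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()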

    For example, banks like BBVA (Spain) and Capital One (USA) offer secure API platforms. Fintechs like Plaid and TrueLayer use those banking APIs as a bridge to users’ financial data. This bridge gives other service providers like Venmo, Robinhood and Coinbase access to customer data, allowing them to offer new payment gateways and investment tools that traditional banks don’t provide.

    Is open banking safe for global financial services?

    Yes, open banking is designed from the ground up to be safe for global financial services.

    Open banking doesn’t make customer financial data publicly available. Instead, it uses a secure, regulated framework for sharing information. This framework relies on strong security measures and regulatory oversight to protect user data and ensure responsible access by authorised third-party providers.

    In the following sections, we’ll explore the key security features and banking regulations that make this technology safe and reliable.

    Regulatory compliance in open banking

    Regulatory oversight is a cornerstone of open banking security.

    In the UK and the EU, strict regulations govern how companies access and use customer data. The revised Payment Services Directive (PSD2) in Europe mandates strong customer authentication and secure communication, promoting a high level of security for open banking services.

    To offer open banking services, companies must register with their respective regulatory bodies and comply with all applicable data protection laws.

    For example, third-party service providers in the UK must be authorised by the Financial Conduct Authority (FCA) and listed on the Financial Services Register. Depending on the service they provide, they must get an Account Information Service Provider (AISP) or a Payment Initiation Service Provider (PISP) licence.

    Similar regulations and registries exist across Europe, enforced by national competent authorities such as BaFin in Germany and the ACPR in France.

    In the United States, open banking providers don’t require a special federal license. However, this will soon change, as the U.S. Consumer Financial Protection Bureau (CFPB) unveiled a series of rules on 22 October 2024 to establish a regulatory framework for open banking.

    These regulations ensure that only trusted providers can participate in the open banking ecosystem. Anyone can check if a company is a trusted provider on public databases like the Regulated Providers registry on openbanking.org.uk. While being registered doesn’t guarantee fair play, it adds a layer of safety for consumers and banks.

    Key open banking security features that make it safe for global financial services

    Open banking is built on a foundation of solid security measures. Let’s explore five key features that make it safe and reliable for financial institutions and their customers.

    List of the five most important features that make open banking safe for global finance

    Strong Customer Authentication (SCA)

    Strong Customer Authentication (SCA) is a security principle that protects against unauthorised access to user financial data. It’s a regulated and legally required form of multi-factor authentication (MFA) within the European Economic Area.

    SCA mandates that users verify their identity using at least two of the following three factors:

    • Something they know (a password, PIN, security question, etc.)
    • Something they have (a mobile phone, a hardware token or a bank card)
    • Something they are (a fingerprint, facial recognition or voice recognition)

    This type of authentication helps reduce the risk of fraud and unauthorised transactions.
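
    Purely as an illustration of the "at least two of three factors" rule (hypothetical factor names, not any real SCA implementation):

    FACTORS = {"knowledge", "possession", "inherence"}   # something you know / have / are

    def sca_satisfied(verified_factors: set) -> bool:
        # SCA requires at least two independent factor categories to be verified
        return len(verified_factors & FACTORS) >= 2

    print(sca_satisfied({"knowledge", "possession"}))   # password + phone OTP -> True
    print(sca_satisfied({"knowledge"}))                 # password alone -> False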

    API security

    PSD2 regulations mandate that banks provide open APIs, giving consumers the right to use any third-party service provider for their online banking services. According to McKinsey research, this has led to a surge in API adoption within the banking sector, with the largest banks allocating 14% of their IT budget to APIs. 

    To ensure API security, banks and financial service providers implement several measures, including:

    • API gateways, which act as a central point of control for all API traffic, enforcing security policies and preventing unauthorised access
    • API keys and tokens to authenticate and authorise API requests (the equivalent of a library card for apps)
    • Rate limiting to prevent denial-of-service attacks by limiting the number of requests a third-party application can make within a specific timeframe (a short sketch follows this list)
    • Regular security audits and penetration testing to identify and address potential vulnerabilities in the API infrastructure
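
    To make the rate-limiting idea concrete, here is a minimal token-bucket sketch (an illustration only; production API gateways implement this per client, per endpoint and usually in a distributed store):

    import time

    class TokenBucket:
        # Allow roughly `rate` requests per second, with short bursts up to `capacity`
        def __init__(self, rate: float, capacity: int):
            self.rate = rate
            self.capacity = capacity
            self.tokens = float(capacity)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill tokens for the time elapsed since the previous request
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False   # the gateway would answer HTTP 429 Too Many Requests

    bucket = TokenBucket(rate=5, capacity=10)   # e.g. ~5 requests/second per API key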

    Data minimisation and purpose limitation

    Data minimisation and purpose limitation are fundamental principles of data protection that contribute significantly to open banking safety.

    Data minimisation means third parties will collect and process only the data necessary to provide their service. Purpose limitation requires them to use the collected data only for its original purpose.

    For example, a budgeting app that helps users track their spending only needs access to transaction history and account balances. It doesn’t need access to the user’s full transaction details, investment portfolio or loan applications.

    Limiting the data collected from individual banks significantly reduces the risk of potential misuse or exposure in a data breach.

    Encryption

    Encryption is a security method that protects data in transit and at rest. It scrambles data into an unreadable format, making it useless to anyone without the decryption key.

    In open banking, encryption protects users’ data as it travels between the bank and the third-party provider’s systems via the API. It also protects data stored on the bank’s and the provider’s servers. Encryption ensures that even if a breach occurs, user data remains confidential.
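
    A minimal sketch of the at-rest case, using the third-party cryptography package (an illustration of the principle only, not a description of any provider’s actual setup; protection in transit is normally handled by TLS):

    from cryptography.fernet import Fernet   # pip install cryptography

    key = Fernet.generate_key()        # must itself be stored securely, e.g. in a key management service
    cipher = Fernet(key)

    record = b'{"account": "XX00EXMP1234", "balance": "1532.40"}'   # hypothetical data
    token = cipher.encrypt(record)     # ciphertext is useless without the key
    assert cipher.decrypt(token) == record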

    Explicit consent

    In open banking, before a third-party provider can access user data, it must first inform the user what data it will pull and why. The customer must then give their explicit consent to the third party collecting and processing that data.

    This transparency and control are essential for building trust and ensuring customers feel safe using third-party services.

    But beyond that, from the bank’s perspective, explicit customer consent is also vital for compliance with GDPR and other data protection regulations. It can also help limit the bank’s liability in case of a data breach.

    Explicit consent goes beyond sharing financial data. It’s also part of new data privacy regulations around tracking user behaviour online. This is where an ethical web analytics solution like Matomo can be invaluable. Matomo fully complies with some of the world’s strictest privacy regulations, like GDPR, LGPD and HIPAA. With Matomo, you get peace of mind knowing you can continue gathering valuable insights to improve your services and user experience while respecting user privacy and adhering to regulations.

    Risks of open banking for global financial services

    While open banking offers significant benefits, it’s crucial to acknowledge the associated risks. Understanding these risks allows financial institutions to implement safeguards and protect themselves and their customers.

    List of the three key risks that banks should always keep in mind.

    Risk of data breaches

    By its nature, open banking is like adding more doors and windows to your house. It’s convenient but also gives burglars more ways to break in.

    Open banking increases what cybersecurity professionals call the “attack surface,” or the number of potential points of vulnerability for hackers to steal financial data.

    Data breaches are a serious threat to banks and financial institutions. According to IBM’s 2024 Cost of a Data Breach Report, each breach costs companies in the US an average of $4.88 million. Therefore, banks and fintechs must prioritise strong security measures and data protection protocols to mitigate these risks.

    Risk of third-party access

    By definition, open banking involves granting third-party providers access to customer financial information. This introduces a level of risk outside the bank’s direct control.

    Financial institutions must carefully vet third-party providers, ensuring they meet stringent security standards and comply with all relevant data protection regulations.

    Risk of user account takeover

    Open banking can increase the risk of user account takeover if adequate security measures are not in place. For example, if a malicious third-party provider gains unauthorised access to a user’s bank login details, they could take control of the user’s account and make fraudulent bank transactions.

    A proactive approach to security, continuous monitoring and a commitment to evolving best practices and security protocols are crucial for navigating the open banking landscape.

    Open banking and data analytics : A balancing act for financial institutions

    The additional data exchanged through open banking unveils deeper insights into customer behaviour and preferences. This data can fuel innovation, enabling the development of personalised products and services and improved risk management strategies.

    However, using this data responsibly requires a careful balancing act.

    Too much reliance on data without proper safeguards can erode trust and invite regulatory issues. Being overly cautious, on the other hand, can stifle innovation and limit the technology’s potential.

    Matomo Analytics derisks web and app environments by giving full control over what data is tracked and how it is stored. The platform prioritises user data privacy and security while providing valuable data and analytics that will be familiar to anyone who has used Google Analytics.

    Open banking, data privacy and AI

    The future of open banking is entangled with emerging technologies like artificial intelligence (AI) and machine learning. These technologies significantly enhance open banking analytics, personalise services, and automate financial tasks.

    Several banks, credit unions and financial service providers are already exploring AI’s potential in open banking. For example, HSBC developed the AI-enabled FX Prompt in 2023 to improve forex trading. The bank processed 823 million client API calls, many of which were open banking calls.

    However, using AI in open banking raises important data privacy considerations. As the American Bar Association highlights, balancing personalisation with responsible AI use is crucial for open banking’s future. Financial institutions must ensure that AI-driven solutions are developed and implemented ethically, respecting customer privacy and data protection.

    Conclusion

    Open banking presents a significant opportunity for innovation and growth in the financial services industry. While it’s important to acknowledge the associated risks, security measures like explicit customer consent, encryption and regulatory frameworks make open banking a safe and reliable system for banks and their clients.

    Financial service providers must adopt a multifaceted approach to data privacy, implementing privacy-centred solutions across all aspects of their business, from open banking to online services and web analytics.

    By prioritising data privacy and security, financial institutions can build customer trust, unlock the full potential of open banking and thrive in today’s changing financial environment.

  • VP8 : a retrospective

    13 July 2010, by Dark Shikari — DCT, speed, VP8

    I’ve been working the past few weeks to help finish up the ffmpeg VP8 decoder, the first community implementation of On2’s VP8 video format. Now that I’ve written a thousand or two lines of assembly code and optimized a good bit of the C code, I’d like to look back at VP8 and comment on a variety of things — both good and bad — that slipped the net the first time, along with things that have changed since the time of that blog post.

    These are less-so issues related to compression — that issue has been beaten to death, particularly in MSU’s recent comparison, where x264 beat the crap out of VP8 and the VP8 developers pulled a Pinocchio in the developer comments. But that was expected and isn’t particularly interesting, so I won’t go into that. VP8 doesn’t have to be the best in the world in order to be useful.

    When the ffmpeg VP8 decoder is complete (just a few more asm functions to go), we’ll hopefully be able to post some benchmarks comparing it to libvpx.

    1. The spec, er, I mean, bitstream guide.

    Google has reneged on their claim that a spec existed at all and renamed it a “bitstream guide”. This is probably after it was found that — not merely was it incomplete — but at least a dozen places in the spec differed wildly from what was actually in their own encoder and decoder software! The deblocking filter, motion vector clamping, probability tables, and many more parts simply disagreed flat-out with the spec. Fortunately, Ronald Bultje, one of the main authors of the ffmpeg VP8 decoder, is rather skilled at reverse-engineering, so we were able to put together a matching implementation regardless.

    Most of the differences aren’t particularly important — they don’t have a huge effect on compression or anything — but make it vastly more difficult to implement a “working” VP8 decoder, or for that matter, decide what “working” really is. For example, Google’s decoder will, if told to “swap the ALT and GOLDEN reference frames”, overwrite both with GOLDEN, because it first sets GOLDEN = ALT, and then sets ALT = GOLDEN. Is this a bug? Or is this how it’s supposed to work? It’s hard to tell — there isn’t a spec to say so. Google says that whatever libvpx does is right, but I doubt they intended this.

    I expect a spec will eventually be written, but it was a bit obnoxious of Google — both to the community and to their own developers — to release so early that they didn’t even have their own documentation ready.

    2. The TM intra prediction mode.

    One thing I glossed over in the original piece was that On2 had added an extra intra prediction mode to the standard batch that H.264 came with — they replaced Planar with “TM pred”. For i4x4, which didn’t have a Planar mode, they just added it without replacing an old one, resulting in a total of 10 modes to H.264’s 9. After understanding and writing assembly code for TM pred, I have to say that it is quite a cool idea. Here’s how it works:

    1. Let us take a block of size 4×4, 8×8, or 16×16.

    2. Define the pixels bordering the top of this block (starting from the left) as T[0], T[1], T[2]…

    3. Define the pixels bordering the left of this block (starting from the top) as L[0], L[1], L[2]…

    4. Define the pixel above the top-left of the block as TL.

    5. Predict every pixel <X,Y> in the block to be equal to clip3(T[X] + L[Y] - TL, 0, 255).
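
    A small sketch of those steps in Python (my own illustration, not code from libvpx or the ffmpeg decoder):

    def tm_predict(T, L, TL):
        # T: pixels above the block, L: pixels to its left, TL: the top-left corner pixel
        clip3 = lambda v, lo, hi: max(lo, min(hi, v))
        size = len(T)
        return [[clip3(T[x] + L[y] - TL, 0, 255) for x in range(size)]
                for y in range(size)]

    # 4x4 example: each predicted pixel follows the gradient implied by its top and left neighbours
    block = tm_predict(T=[120, 130, 140, 150], L=[118, 116, 114, 112], TL=119)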

    It’s effectively a generalization of gradient prediction to the block level — predict each pixel based on the gradient between its top and left pixels, and the top-left. According to the VP8 devs, it’s chosen by the encoder quite a lot of the time, which isn’t surprising; it seems like a pretty good idea. As just one more intra pred mode, it’s not going to do magic for compression, but it’s a cool idea and elegantly simple.

    3. Performance and the deblocking filter.

    On2 advertised for quite some time that VP8’s goal was to be significantly faster to decode than H.264. When I saw the spec, I waited for the punchline, but apparently they were serious. There’s nothing wrong with being of similar speed or a bit slower — but I was rather confused by the fact that their design didn’t match their stated goal at all. What apparently happened is they had multiple profiles of VP8 — high and low complexity profiles. They marketed the performance of the low complexity ones while touting the quality of the high complexity ones, which is a tad dishonest. More importantly though, practically nobody is using the low complexity modes, so anyone writing a decoder has to be prepared to handle the high complexity ones, which are the default.

    The primary time-eater here is the deblocking filter. VP8, being an H.264 derivative, has much the same problem as H.264 does in terms of deblocking — it spends an absurd amount of time there. As I write this post, we’re about to finish some of the deblocking filter asm code, but before it’s committed, up to 70% or more of total decoding time is spent in the deblocking filter! Like H.264, it suffers from the 4×4 transform problem: a 4×4 transform requires a total of 8 length-16 and 8 length-8 loopfilter calls per macroblock, while Theora, with only an 8×8 transform, requires half that.
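
    One way to see where those per-macroblock counts come from (a back-of-the-envelope sketch of my own, counting one filtered edge per transform boundary, macroblock edge included):

    def loopfilter_calls(transform_size):
        # 16x16 luma plane: vertical + horizontal edges every `transform_size` pixels, each of length 16
        luma_length16 = 2 * (16 // transform_size)
        # Two 8x8 chroma planes (4:2:0), edges of length 8
        chroma_length8 = 2 * 2 * (8 // transform_size)
        return luma_length16, chroma_length8

    print(loopfilter_calls(4))   # (8, 8): the VP8/H.264-style 4x4 transform
    print(loopfilter_calls(8))   # (4, 4): an 8x8 transform, as in Theora - half the work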

    This problem is aggravated in VP8 by the fact that the deblocking filter isn’t strength-adaptive; if even one 4×4 block in a macroblock contains coefficients, every single edge has to be deblocked. Furthermore, the deblocking filter itself is quite complicated; the “inner edge” filter is a bit more complex than H.264’s and the “macroblock edge” filter is vastly more complicated, having two entirely different codepaths chosen on a per-pixel basis. Of course, in SIMD, this means you have to do both and mask them together at the end.

    There’s nothing wrong with a good-but-slow deblocking filter. But given the amount of deblocking one needs to do in a 4×4-transform-based format, it might have been a better choice to make the filter simpler. It’s pretty difficult to beat H.264 on compression, but it’s certainly not hard to beat it on speed — and yet it seems VP8 missed a perfectly good chance to do so. Another option would have been to pick an 8×8 transform instead of 4×4, reducing the amount of deblocking by a factor of 2.

    And yes, there’s a simple filter available in the low complexity profile, but it doesn’t help if nobody uses it.

    4. Tree-based arithmetic coding.

    Binary arithmetic coding has become the standard entropy coding method for a wide variety of compressed formats, ranging from LZMA to VP6, H.264 and VP8. It’s simple, relatively fast compared to other arithmetic coding schemes, and easy to make adaptive. The problem with this is that you have to come up with a method for converting non-binary symbols into a list of binary symbols, and then choosing what probabilities to use to code each one. Here’s an example from H.264, the sub-partition mode symbol, which is either 8×8, 8×4, 4×8, or 4×4. encode_decision( context, bit ) writes a binary decision (bit) into a numbered context (context).

    8×8: encode_decision( 21, 0 );

    8×4: encode_decision( 21, 1 ); encode_decision( 22, 0 );

    4×8: encode_decision( 21, 1 ); encode_decision( 22, 1 ); encode_decision( 23, 1 );

    4×4: encode_decision( 21, 1 ); encode_decision( 22, 1 ); encode_decision( 23, 0 );

    As can be seen, this is clearly like a Huffman tree. Wouldn’t it be nice if we could represent this in the form of an actual tree data structure instead of code ? On2 thought so — they designed a simple system in VP8 that allowed all binarization schemes in the entire format to be represented as simple tree data structures. This greatly reduces the complexity — not speed-wise, but implementation-wise — of the entropy coder. Personally, I quite like it.
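
    A rough sketch of what such a tree can look like (my own illustration, reusing the contexts from the H.264 example above rather than VP8’s actual tables):

    # Each internal node is (context, branch_for_bit_0, branch_for_bit_1); leaves are decoded symbols.
    SUB_MB_TREE = (21,
                   "8x8",                         # bit 0 at context 21
                   (22,                           # bit 1 at context 21
                    "8x4",                        # then bit 0 at context 22
                    (23, "4x4", "4x8")))          # then context 23: bit 0 -> 4x4, bit 1 -> 4x8

    def decode_symbol(tree, decode_decision):
        # Walk the tree, reading one arithmetic-coded bit per internal node
        node = tree
        while isinstance(node, tuple):
            context, zero_branch, one_branch = node
            node = one_branch if decode_decision(context) else zero_branch
        return node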

    5. The inverse transform ordering.

    I should at some point write a post about common mistakes in video formats that everyone keeps making. These are not issues that are patent worries or huge issues for compression — just stupid mistakes that are repeatedly made in new video formats, probably because someone just never asked the guy next to him “does this look stupid?” before sticking it in the spec.

    One common mistake is the problem of transform ordering. Every sane 2D transform is “separable” — that is, it can be done by doing a 1D transform vertically and doing the 1D transform again horizontally (or vice versa). The original iDCT as used in JPEG, H.263, and MPEG-1/2/4 was an “idealized” iDCT — nobody had to use the exact same iDCT, theirs just had to give very close results to a reference implementation. This ended up resulting in a lot of practical problems. It was also slow; the only way to get an accurate enough iDCT was to do all the intermediate math in 32-bit.

    Practically every modern format, accordingly, has specified an exact iDCT. This includes H.264, VC-1, RV40, Theora, VP8, and many more. Of course, with an exact iDCT comes an exact ordering — while the “real” iDCT can be done in any order, an exact iDCT usually requires an exact order. That is, it specifies horizontal and then vertical, or vertical and then horizontal.

    All of these transforms end up being implemented in SIMD. In SIMD, a vertical transform is generally the only option, so a transpose is added to the process instead of doing a horizontal transform. Accordingly, there are two ways to do it:

    1. Transpose, vertical transform, transpose, vertical transform.

    2. Vertical transform, transpose, vertical transform, transpose.

    These may seem to be equally good, but there’s one catch — if the transpose is done first, it can be completely eliminated by merging it into the coefficient decoding process. On many modern CPUs, particularly x86, transposes are very expensive, so eliminating one of the two gives a pretty significant speed benefit.
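
    Sketched with NumPy, using a trivial placeholder in place of the real 1D transform (my own illustration; the point is only the ordering of operations):

    import numpy as np

    def vtransform(block):
        # Stand-in for a 1D transform applied down each column (the natural direction in SIMD)
        return np.cumsum(block, axis=0)

    def way1(coeffs):
        # Transpose, transform, transpose, transform - the leading transpose can be folded
        # into coefficient decoding, leaving only one real transpose.
        return vtransform(vtransform(coeffs.T).T)

    def way2(coeffs):
        # Transform, transpose, transform, transpose - here both transposes follow a transform,
        # so neither can be merged away.
        return vtransform(vtransform(coeffs).T).T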

    H.264 did it way 1).

    VC-1 did it way 1).

    Theora (inherited from VP3) did it way 1).

    But no. VP8 has to do it way 2), where you can’t eliminate the transpose. Bah. It’s not a huge deal; probably only 1-2% overall at most speed-wise, but it’s just a needless waste. What really bugs me is that VP3 got it right — why in the world did they screw it up this time around if they got it right beforehand?

    RV40 is the other modern format I know that made this mistake.

    (NB: You can do transforms without a transpose, but it’s generally not worth it unless the intermediate needs 32-bit math, as in the case of the “real” iDCT.)

    6. Not supporting interlacing.

    THANK YOU THANK YOU THANK YOU THANK YOU THANK YOU THANK YOU THANK YOU.

    Interlacing was the scourge of H.264. It weaseled its way into every nook and cranny of the spec, making every decoder a thousand lines longer. H.264 even included a highly complicated — and effective — dedicated interlaced coding scheme, MBAFF. The mere existence of MBAFF, despite its usefulness for broadcasters and others still stuck in the analog age with their 1080i, 576i, and 480i content, was a blight upon the video format.

    VP8 has once and for all avoided it.

    And if anyone suggests adding interlaced support to the experimental VP8 branch, find a straightjacket and padded cell for them before they cause any real damage.