
Other articles (18)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Final creation of the channel

    12 March 2010

    Once your request is approved, you can proceed with the actual creation of the channel. Each channel is a fully fledged site placed under your responsibility. The administrators of the platform have no access to it.
    Upon approval, you receive an email inviting you to create your channel.
    To do so, simply go to its address, in our example "http://votre_sous_domaine.mediaspip.net".
    At that point you will be asked for a password; you simply need to (...)

  • The farm's regular Cron tasks

    1 December 2010

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the instances in the shared hosting setup on a regular basis. Combined with a system Cron on the central site of the farm, this makes it possible to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)

On other sites (3659)

  • Heroic Defender of the Stack

    27 January 2011, by Multimedia Mike — Programming

    Problem Statement

    I have been investigating stack smashing and countermeasures (stack smashing prevention, or SSP). Briefly, stack smashing occurs when a function allocates a static array on the stack and writes past the end of it, onto other local variables and eventually onto other functions' stack frames. When it comes time to return from the function, the return address has been corrupted and the program ends up someplace it really shouldn't. In the best case, the program just crashes; in the worst case, a malicious party crafts code to exploit this malfunction.

    Further, debugging such a problem is especially obnoxious because by the time the program has crashed, it has already trashed any record (on the stack) of how it got into the errant state.

    Preventative Countermeasure

    GCC has had SSP since version 4.1. The compiler inserts SSP as additional code when the -fstack-protector command line switch is specified. Implementation-wise, SSP basically inserts a special value (the literature refers to this as the 'canary', as in "canary in the coal mine") at the top of the stack frame when entering the function, and adds code before leaving the function to make sure the canary didn't get stepped on. If something happens to the canary, the program is immediately aborted with a message to stderr about what happened. Further, gcc's man page on my Ubuntu machine proudly trumpets that this functionality has been enabled by default ever since Ubuntu 6.10.
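
    Conceptually, the instrumentation behaves something like the following hand-written approximation. This is only a sketch of the idea: the real check is emitted directly by the compiler, and the guard value may live in thread-local storage rather than a global symbol depending on the platform, but __stack_chk_fail is the real runtime hook that prints the message and aborts.

    extern unsigned long __stack_chk_guard;  /* stand-in for wherever the
                                                platform keeps the random
                                                guard value set at startup */
    extern void __stack_chk_fail(void);      /* reports the smash and aborts */

    void protected_function(void)
    {
        unsigned long canary = __stack_chk_guard;  /* placed above the buffers */
        char buffer[64];

        /* ... function body that might overflow buffer ... */

        if (canary != __stack_chk_guard)  /* the canary got stepped on */
            __stack_chk_fail();
    }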

    And that’s really all there is to it. Your code is safe from stack smashing by default. Or so the hand-wavy documentation would have you believe.

    Not exactly

    Exercising the SSP

    I wanted to see the SSP in action to make sure it was a real thing. So I wrote some code that smashes the stack in pretty brazen ways so that I could reasonably expect to trigger the SSP (see later in this post for the code). Here's what I learned that wasn't in any documentation:

    SSP is only emitted for functions that have static arrays of 8-bit data (i.e., [unsigned] chars). If you have static arrays of other data types (like, say, 32-bit ints), those are still fair game for stack smashing.

    Evaluating the security vs. speed/code size trade-offs, it makes sense that the compiler wouldn't apply this protection everywhere (I can only muse about how my optimization-obsessed multimedia hacking colleagues would absolutely freak out if this code were unilaterally added to all functions). So why are only static char arrays deemed to be "vulnerable objects" (the wording the gcc man page uses)? A security hacking colleague suggested that this is probably because the kind of data that poses the highest risk is arrays of 8-bit input data from, e.g., network sources.

    The gcc man page also lists an option -fstack-protector-all that is supposed to protect all functions. The man page's definition of "all functions" perhaps differs from my own, since invoking the option produced results no different from plain, vanilla -fstack-protector.

    The Valgrind Connection

    "Memory trouble ? Run Valgrind !" That may as well be Valgrind’s marketing slogan. Indeed, it’s the go-to utility for finding troublesome memory-related problems and has saved me on a number of occasions. However, it must be noted that it is useless for debugging this type of problem. If you understand how Valgrind works, this makes perfect sense. Valgrind operates by watching all memory accesses and ensuring that the program is only accessing memory to which it has privileges. In the stack smashing scenario, the program is fully allowed to write to that stack space ; after all, the program recently, legitimately pushed that return value onto the stack when calling the errant, stack smashing function.

    Valgrind comprises a suite of tools. My idea for an addition to this suite would be a mechanism that tracks return addresses every time a call instruction is encountered. The tool could track the return addresses in a separate stack data structure, though this might have some thorny consequences for more unusual program flows. Instead, it might track them in some kind of hash/dictionary data structure and warn the programmer whenever a 'ret' instruction is returning to an address that isn't in the dictionary.
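
    For what it's worth, the bookkeeping half of this idea can be prototyped without Valgrind using gcc's -finstrument-functions switch, which calls the real hooks __cyg_profile_func_enter() and __cyg_profile_func_exit() around every instrumented function. Here is a rough sketch; since the call_site argument is derived from the on-stack return address, whether this actually catches a given smash depends on compiler specifics, so treat it as an illustration of the idea rather than a robust tool:

    /* shadow.c -- compile the program under test together with this file:
       gcc -finstrument-functions program.c shadow.c */
    #include <stdio.h>
    #include <stdlib.h>

    #define SHADOW_DEPTH 4096

    static void *shadow_stack[SHADOW_DEPTH];
    static int shadow_top = 0;

    __attribute__((no_instrument_function))
    void __cyg_profile_func_enter(void *this_fn, void *call_site)
    {
        /* remember where this call is supposed to return */
        if (shadow_top < SHADOW_DEPTH)
            shadow_stack[shadow_top] = call_site;
        shadow_top++;
    }

    __attribute__((no_instrument_function))
    void __cyg_profile_func_exit(void *this_fn, void *call_site)
    {
        shadow_top--;
        /* if the return address no longer matches, the stack was trashed */
        if (shadow_top < SHADOW_DEPTH && shadow_stack[shadow_top] != call_site) {
            fprintf(stderr, "suspicious return address in function %p\n", this_fn);
            abort();
        }
    }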

    Simple Stack Smashing Code

    Here’s the code I wrote to test exactly how SSP gets invoked in gcc. Compile with ’gcc -g -O0 -Wall -fstack-protector-all -Wstack-protector stack-fun.c -o stack-fun’.

    stack-fun.c:

    /* keep outside of the stack frame */
    static int i;

    void stack_smasher32(void)
    {
        int buffer32[8];
        // uncomment this array and compile without optimizations
        // in order to force this function to compile with SSP
        // char buffer_to_trigger_ssp[8];

        for (i = 0; i < 50; i++)
            buffer32[i] = 0xA5;
    }

    void stack_smasher8(void)
    {
        char buffer8[8];
        for (i = 0; i < 50; i++)
            buffer8[i] = 0xA5;
    }

    int main()
    {
        // stack_smasher8();
        stack_smasher32();
        return 0;
    }

    The above incarnation should just produce the traditional "Segmentation fault". However, uncommenting and executing stack_smasher8() in place of stack_smasher32() should result in "*** stack smashing detected ***: ./stack-fun terminated", followed by the venerable "Segmentation fault".

    As indicated in the comments for stack_smasher32(), it's possible to trick the compiler into emitting SSP for a function by inserting an array of at least 8 bytes (any smaller and SSP won't be emitted, as documented, unless gcc's ssp-buffer-size parameter is lowered). This has to be compiled with no optimization at all (-O0) or else the compiler will (quite justifiably) optimize away the unused buffer and omit SSP.

    For reference, I ran my tests on Ubuntu 10.04.1 with gcc 4.4.3 compiling the code for both x86_32 and x86_64.

  • What Is Ethical SEO & Why Does It Matter?

    7 May 2024, by Erin

    Do you want to generate more revenue ?

    Then, you need to ensure you have a steady stream of traffic flowing to your site.

    Search engines like Google, Bing and Yahoo are powerful mediums you can use to scale your business.

    Search engine optimisation (SEO) is the process of creating search engine-friendly content to draw in traffic to your website. But, if you aren’t careful, you could be crossing the line of ethical SEO into unethical SEO.

    In this article, we break down what ethical SEO is, why it’s important in business and how you can implement effective SEO into your business while remaining ethical.

    Let’s begin.

    What is ethical SEO?

    Since the early days of the internet and search engines, business owners and marketers have tried using all kinds of SEO tactics to rank atop the search engines for relevant keywords.

    The problem?

    Some of these practices are ethical, while others aren’t.

    What exactly is ethical SEO?

    It’s the practice of optimising your website’s rankings in search engines by following search engine guidelines and prioritising user experience.


    Ethical SEO is also referred to as “white hat SEO.”

    On the other hand, businesses that break search engine rules and guidelines to “hack” their way to the top with faulty and questionable practices use unethical SEO, or “black hat SEO.”

    Ethical SEO aims to achieve higher rankings in search engines through sustainable, legitimate and fair methods.

    Black hat, or unethical SEO, aims to manipulate or “game” the system with deceptive strategies to bypass the search engine’s guidelines to rank higher.

    The two core branches of ethical SEO are:

    1. Strategies that align with search engine guidelines.
    2. Accessibility to broad audiences.

    Some examples of ethical SEO principles include:

    • Natural link building
    • Compliance with search engine guidelines
    • Establishing great user experiences
    • Creating reader-focused content

    By sticking to the right guidelines and implementing proper SEO practices, businesses can establish ethical SEO to generate more traffic and grow their brands.

    8 ethical SEO practices to implement

    If you want to grow your organic search traffic, then there’s no doubt you’ll need to have some SEO knowledge.

    While there are dozens of ways to “game” SEO, it’s best to stick to proven, ethical SEO techniques to improve your rankings.

    Stick to these best practices to increase your rankings in the search engine results pages (SERPs), increase organic traffic and improve your website conversions.


    1. Crafting high-quality content

    The most important piece of any ethical SEO strategy is content.

    Forget about rankings, keywords and links for a second.

    Step back and think about why people go to Google, Bing and Yahoo in the first place.

    They’re there looking for information. They have a question they need answered. That’s where you can come in and give them the answer they want. 

    How? In the form of content.

    The best long-term ethical SEO strategy is to create the highest-quality content possible. Crafting high-quality content should be where you focus 90% of your SEO efforts.

    2. Following search engine guidelines

    Once you’ve got a solid content creation strategy, where you’re producing in-depth, quality content, you need to ensure you’re following the guidelines and rules put in place by the major search engines.

    This means you need to stay compliant with the best practices and guidelines laid out by the top search engines.

    If you fail to follow these rules, you could be penalised, your content could be downgraded or removed from search engines, and your entire website could even be flagged, hurting the organic search traffic to your whole site.

    You need to ensure you align with the guidelines so you’re set up for long-term success with your SEO.

    3. Conducting keyword research and optimisation

    Now that we’ve covered content and guidelines, let’s talk about the technical stuff, starting with keywords.

    In the early days of SEO (late 90s), just about anyone could rank a web page high by stuffing keywords all over the page.

    While those black hat techniques used to "game" the system, they don't work anymore. Google and other major search engines have much more advanced algorithms that can detect keyword stuffing and manipulation.

    Keywords are still a major part of a successful SEO strategy. You can ethically incorporate keywords into your content (and you should) if you want to rank higher. 

    Your main goal with your content is to match it with the search intent. So, incorporating keywords should come naturally throughout your content. If you try to stuff in unnecessary keywords or use spammy techniques, you may not even rank at all and could harm your website’s rankings.

    4. Incorporating natural link building

    After you’ve covered content and keywords, it’s time to dive into links. Backlinks are any links that point back to your website from another website.

    These are a crucial part of the SEO pie. Without them, it's hard to rank high on Google. They work well because they tell Google your web page or website has authority on the subject matter.

    But you could be penalised if you try to manipulate backlinks by purchasing them or spamming them from other websites.

    Instead, you should aim to draw in natural backlinks by creating content that attracts them.

    How? There are several options:

    • Content marketing
    • Email outreach
    • Brand mentions
    • Public relations
    • Ethical guest posting

    Get involved in other people’s communities. Get on podcasts. Write guest posts. Connect with other brands. Provide value in your niche and create content worth linking to.

    5. Respecting the intellectual property of other brands

    Content creation is moving at lightspeed in the creator economy and social media era. For better or for worse, content is going viral every day. People share content, place their spin on it, revise it, optimise it, and spread it around the internet.

    Unfortunately, this means the content is sometimes shared without the owner’s permission. Content is one form of intellectual property (IP). 

    If you share copyrighted material, you could face legal consequences.

    6. Ensuring transparency

    Transparency is one of the pillars of ethical marketing.

    If you’re running the SEO in your company or an agency, you should always explain the SEO strategies and tactics you’re implementing to your stakeholders.

    It’s best to lean on transparency and honesty to ensure your team knows you’re running operations ethically.

    7. Implementing a great user experience

    The final pillar of ethical SEO practices is offering a great user experience on your website.

    Major search engines like Google are favouring user experience more and more every year. This means you need to know how to track and analyse website metrics like page load time, time on page, pageviews, media plays and tracked events.

    8. Using an ethical web analytics solution

    Last but certainly not least: tracking your website visitors ethically is key to keeping your SEO ethical.

    You can do this by using an ethical web analytics solution like Matomo, Plausible or Fathom. All three are committed to respecting user privacy and offer ethical tracking of visitors.

    We're a bit biased towards Matomo, of course, but for good reasons.

    Matomo offers accurate, unsampled data along with advanced features like heatmaps, session recording, and A/B testing. These features enhance user experience and support ethical SEO practices by providing insights into user behaviour, helping optimise content. 


    6 unethical SEO practices to avoid

    Now that we've covered ethical SEO best practices, let's talk about the kinds of unethical SEO practices you want to avoid.

    Remember, SEO isn't as easy to manipulate as it was 20 years ago.

    Algorithms are much more sophisticated now, and search engines are getting better at detecting fraudulent, scammy or unethical SEO practices every year.

    Avoid these six unethical SEO practices to ensure you can rank high in the long term:


    1. Keyword stuffing

    Keyword stuffing is probably the most common unethical SEO practice. This is where someone deliberately stuffs keywords onto a page to manipulate the search engines to rank a web page higher.

    Whether keyword use crosses into unethical territory isn't always easy to detect, but in some cases it is. It comes down to whether the keywords are relevant and natural or deliberately stuffed.

    2. Cloaking

    Cloaking is another unethical SEO practice where someone manipulates the information search engines see on their website.

    For example, a site may show search engines one version of a web page, but when a user clicks the result in Google, send them to a completely different page. This is done by detecting the user agent of the incoming request and serving different content accordingly.

    3. Deceptive functionality

    Another unethical SEO tactic is deceiving people with misleading information. For example, a website may claim to offer a free resource or directory but intentionally lead visitors to paid products.

    4. Fraudulent redirects

    Another way to deceive or mislead searchers is by creating fraudulent redirects. A redirect sends someone to a different web page than the one they clicked on. Redirects can be useful when a page is broken or outdated. However, they can also be used to deceptively send someone to a website they didn't intend to visit.

    5. Negative SEO

    Negative SEO is the intentional attempt to harm a competitor’s search engine rankings through unethical tactics.

    These tactics include duplicating their content or generating spammy links by creating low-quality or irrelevant backlinks to their site.

    6. Hidden text

    Placing hidden text on a website typically has one purpose: keyword stuffing.

    Instead of making it visible to users reading the content, websites will place invisible text or text that’s hard to read on a website to try to rank the content higher and manipulate the search engines.

    3 reasons you need to implement ethical SEO

    So, why should you ensure you only implement ethical SEO in your organic traffic strategy?

    It's not just about what's morally right or wrong. Implementing ethical SEO is the smartest long-term marketing strategy:

    1. Better long-term SEO

    Search engine optimisation is about implementing the “right” tactics to get your website to rank higher.

    The funny thing is that many people chase quick fixes, manipulating search engines to see results now.

    However, the ones who implement shady tactics and “hacks” to game the system almost always end up losing their rankings in the long term. 

    The best long-term SEO strategy is to do things ethically. Create content that helps people. Make higher quality content than your competitors. If you do those two things right, you’ll have better search traffic for years.

    2. Great brand reputation

    Not only is ethical SEO a great way to get long-term results, but it’s also a good way to maintain a solid brand reputation.

    Reputation management is a crucial aspect of SEO. All it takes is one bad incident, and your SEO could be negatively impacted.

    3. Lower chance of penalties

    If you play by the rules, you have a lower risk of being penalised by Google.

    The reality is that Google owns the search engine, not you. While you can benefit from the traffic major search engines generate, you could lose all your rankings if you break their guidelines.

    Track SEO data ethically with Matomo

    Ethical SEO is all about:

    • Serving your audience
    • Getting better traffic in the long run

    If you fail to follow ethical SEO practices, you could be de-ranked or put your reputation on the line.

    However, if you implement ethical SEO, you could reap the rewards of a sustainable marketing strategy that helps you grow your traffic correctly and increase conversions in the long term.

    If you’re ready to start implementing ethical SEO, you need to ensure you depend on an ethical web analytics solution like Matomo.

    Unlike other web analytics solutions, Matomo prioritises user privacy, maintains transparent, ethical data collection practices, and does not sell user data to advertisers. Matomo provides 100% data ownership, ensuring that your data remains yours to own and control.

    As the leading privacy-friendly web analytics solution globally, trusted by over 1 million websites, Matomo ensures:

    • Accurate data without data sampling for confident insights and better results
    • Privacy-friendly and GDPR-compliant web analytics
    • Open-source access for transparency and creating a custom solution tailored to your needs

    Try Matomo free for 21 days. No credit card required.

  • Writing A Dreamcast Media Player

    6 January 2017, by Multimedia Mike — Sega Dreamcast

    I know I’m not the only person to have the idea to port a media player to the Sega Dreamcast video game console. But I did make significant progress on an implementation. I’m a little surprised to realize that I haven’t written anything about it on this blog yet, given my propensity for publishing my programming misadventures.


    3 Dreamcast consoles in a row

    This old effort had been on my mind lately due to its architectural similarities to something else I was recently brainstorming.

    Early Days
    Porting a multimedia player was one of the earliest endeavors that I embarked upon in the multimedia domain. It’s a bit fuzzy for me now, but I’m pretty sure that my first exposure to the MPlayer project in 2001 arose from looking for a multimedia player to port. I fed it through the Dreamcast development toolchain but encountered roadblocks pretty quickly. However, this got me looking at the MPlayer source code and made me wonder how I could contribute, which is how I finally broke into practical open source multimedia hacking after studying the concepts and technology for more than a year at that point.

    Eventually, I jumped over to the xine project. After hacking on that for a while, I remembered my DC media player efforts and endeavored to compile xine for the console. The first attempt was to simply compile the codebase using the Dreamcast hobbyist community's toolchain. This is when I came to fear the multithreaded snake pit in xine's core. Again, my memories are hazy on the specifics, but I remember the engine having a bunch of threading hacks with comments along the lines of "this code deadlocks sometimes, so on shutdown, monitor this lock and deliberately break it if it has been held for more than 3 seconds".

    Something Workable
    Eventually, I settled on a combination of FFmpeg's libavcodec library for audio and video decoders, xine's demuxer library, and xine's input API, combined with my own engine code to tie it all together along with video and output drivers provided by the KallistiOS hobbyist OS for Dreamcast. Here is a simple diagram of the data movement through this player:


    Architecture diagram for a Sega Dreamcast media player

    Details and Challenges
    This is a rare occasion when I actually got to write the core of a media player engine. I made some mistakes.

    xine's internal clock ran at 90000 Hz. At least, its internal timestamps were all in reference to a 90 kHz clock. I got this brilliant idea to trigger timer interrupts at 6000 Hz to drive the engine. Given the timer facilities the Dreamcast offered, 6 kHz was the highest frequency I found that divided evenly into 90 kHz; if I could have found an even higher common frequency, I would have used that instead.

    So the idea was that, for a 30 fps video, the engine would know to render a frame on every 200th timer interrupt. I eventually realized that servicing 6000 timer interrupts every second would incur a ridiculous amount of overhead. After that, my engine’s philosophy was to set a timer to fire for the next frame while beginning to process the current frame. I.e., when rendering a frame, set a timer to call back in 1/30th of a second. That worked a lot better.
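
    Here is a minimal sketch of that second philosophy, simulated on a POSIX host with nanosleep() standing in for whatever one-shot timer the console provides; the constants mirror the 90 kHz clock and 30 fps figures above, while the simulation itself is purely illustrative:

    #include <stdio.h>
    #include <time.h>

    #define CLOCK_HZ    90000             /* xine-style 90 kHz timestamp clock */
    #define FPS         30
    #define FRAME_TICKS (CLOCK_HZ / FPS)  /* 3000 clock ticks per frame */

    int main(void)
    {
        struct timespec frame_period = { 0, 1000000000L / FPS };  /* 1/30 s */

        for (int frame = 0; frame < 5; frame++) {
            /* on real hardware: arm a one-shot timer for the next frame,
               then decode and render the current one */
            printf("frame %d: pts = %d ticks\n", frame, frame * FRAME_TICKS);
            nanosleep(&frame_period, NULL);
        }
        return 0;
    }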

    As I was still keen on 8-bit paletted image codecs at the time (especially since they were simple and small for bootstrapping this project), I got to output paletted images directly thanks to the Dreamcast's paletted textures. So that was exciting. The engine didn't need to convert the paletted images to a different colorspace before rendering. However, I seem to recall that the Dreamcast's PowerVR graphics hardware required that 8-bit textures be twiddled/swizzled. Thus, it was still necessary to manipulate the 8-bit image before rendering.
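
    For reference, the "twiddled" layout is, as I understand it, a Morton-order (Z-order) arrangement: the bits of a texel's x and y coordinates are interleaved to form its storage index. A minimal sketch of that addressing for a square texture follows; which coordinate supplies the low bit depends on the hardware convention, so take this as illustrative rather than a drop-in PowerVR routine:

    #include <stdint.h>

    /* spread the low 16 bits of n to the even bit positions: abcd -> 0a0b0c0d */
    static uint32_t part1by1(uint32_t n)
    {
        n &= 0x0000ffff;
        n = (n | (n << 8)) & 0x00ff00ff;
        n = (n | (n << 4)) & 0x0f0f0f0f;
        n = (n | (n << 2)) & 0x33333333;
        n = (n | (n << 1)) & 0x55555555;
        return n;
    }

    /* storage index of texel (x, y) in a twiddled square texture */
    static uint32_t twiddle_index(uint32_t x, uint32_t y)
    {
        return part1by1(x) | (part1by1(y) << 1);
    }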

    I made good progress on this player concept. However, a huge blocker for me was that I didn’t know how to make a proper user interface for the media player. Obviously, programming the Dreamcast occurred at a very low level (at least with the approach I was using), so there were no UI widgets easily available.

    This was circa 2003. I assumed there must have been some embedded UI widget libraries with amenable open source licenses that I could leverage. I remember searching and checking out a library named libSTK. I think STK stood for “set-top toolkit” and was positioned specifically for doing things like media player UIs on low-spec embedded computing devices. The domain hosting the project is no longer useful but this appears to be a backup of the core code.

    It sounded promising, but the libSTK developers had a different definition of "low-spec embedded" device than I did. I seem to recall that they were targeting something along the lines of a Pentium III clocked at 800 MHz with 128 MB RAM. The Dreamcast, by contrast, has a 200 MHz SH-4 CPU and 16 MB RAM. LibSTK was also authored in C++ and leveraged the Boost library (my first exposure to that code), all of which had the effect of making binaries quite large while I was trying to keep the player in lean C.

    Regrettably, I never made any serious progress on a proper user interface. I think that’s when the player effort ran out of steam.

    The Code
    So, that’s another project that I never got around to finishing or publishing. I was able to find the source code so I decided to toss it up on github, along with 2 old architecture outlines that I was able to dig up. It looks like I was starting small, just porting over a few of the demuxers and decoders that I knew well.

    I'm wondering if it would still be as straightforward to separate out such components now, more than 13 years later?

    The post Writing A Dreamcast Media Player first appeared on Breaking Eggs And Making Omelettes.