Measuring the rate of DNA repair in HD cells

Blog post by Dr. Tamara Maiuri

Last time, I wrote about how the system to pull down huntingtin and its associated DNA repair proteins works in cells from an HD patient. The drawback to using these cells is that they grow very slowly and don’t yield much protein. So, the last few weeks have been spent stockpiling cells. My stash is growing, but it will be several more weeks before I have enough material to send for mass spectrometry, which will give us a list of all the proteins that interact with huntingtin under conditions of oxidative DNA damage.

In the meantime, I’ve been thinking about what we’re going to do with the information we get. What do we want to know about the huntingtin interacting proteins we identify?

Well, we know that DNA repair is an important aspect of disease progression. The age at which people get sick, and other signs of progression including brain structure, are affected by small changes in people’s DNA repair genes. What’s more, the huntingtin protein acts as a scaffold for DNA repair proteins. Maybe this job is affected by the expansion that causes HD.

Once we have a list of proteins that interact with huntingtin upon DNA damage, we want to know if, and how, they affect the DNA repair process in HD cells. What we need is a way to measure the DNA repair rates in HD patient cells. Then we can ask: if we tweak the proteins that interact with huntingtin upon oxidative DNA damage, what happens to the repair rates? That way, down the road, we could use those proteins as drug targets to improve the DNA repair situation.

But one step at a time. First we need the DNA repair measuring stick. There are a few options for this, but I recently came across a cool one. It works by first damaging DNA in a test tube, then introducing it into cells, then measuring how well the cells repair the damage in order to express a gene on the DNA. The gene encodes green fluorescent protein (GFP), so you can measure expression (as a proxy for DNA repair) by how many cells are glowing green.
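To make the readout concrete: the repair measure is simply the fraction of GFP-positive (glowing) cells in a sample given damaged DNA, normalized to the same fraction in cells given undamaged DNA. Here is a minimal sketch of that calculation, with made-up cell counts for illustration only (real experiments count GFP-positive cells by flow cytometry or imaging):

    # Hypothetical counts, for illustration only.
    samples = {
        "undamaged plasmid": {"gfp_pos": 4200, "total": 10000},
        "damaged plasmid": {"gfp_pos": 1300, "total": 10000},
        "damaged + PARP inhibitor": {"gfp_pos": 600, "total": 10000},
    }

    baseline = samples["undamaged plasmid"]["gfp_pos"] / samples["undamaged plasmid"]["total"]
    for name, counts in samples.items():
        fraction = counts["gfp_pos"] / counts["total"]
        # Repair capacity: GFP expression relative to the undamaged control
        print(f"{name:<25} {100 * fraction / baseline:5.1f}% of undamaged signal")

A lower normalized percentage means the cells repaired, and therefore expressed, less of the damaged DNA.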

Question 1: Does the system even work?

The first thing I did was to try this in the easy-to-use HEK293 cells (HD patient fibroblasts don’t take up DNA very easily, and this will be a challenge to overcome down the road!). The system worked quite nicely: the cells with damaged DNA didn’t express as much GFP as those with undamaged DNA, as expected. Also, repair of the DNA was slowed down by a drug called Veliparib, which inhibits the DNA repair protein PARP. See the results on Zenodo.

Question 2: Is there a difference in repair rates between normal and HD cells?

Once again, before tackling this question in HD patient cells, I opted for the easier-to-work-with mouse cells while I set up the system. In the first attempt (deposited to Zenodo), too few HD cells were recovered. From the few cells that were, it looked like there might be a decreased DNA repair rate in the HD cells compared to the normal cells.

In the second attempt, enough cells were recovered to tell what was going on. The HD cells did in fact have a lower DNA repair rate, but inhibiting the DNA repair protein PARP had no effect (results on Zenodo). This could mean one of two things: either the difference we see between normal and HD cells is not because of DNA repair rates (which would be a bummer), or PARP inhibition is not working under these conditions. I’m hoping for the latter, and will try some different strategies to make sure we’re dealing with true DNA repair rates here. If we are, then we can use this method to further investigate the huntingtin interacting proteins we identify, and how they cooperate with huntingtin in the DNA repair process.

There are some other ways we can look at DNA repair rates in cells, as well as comparing the dynamics of the huntingtin protein (getting to and from damaged DNA) in normal versus HD cells. I will tackle some of those approaches and report them in the coming weeks.

This work is funded by the HDSA Berman/Topper HD Career Development Fellowship.

So you want to write a CIHR grant…

My Project applications are complete. I’ve decided to offer some sage (old guy) advice on the technical aspects of writing a CIHR grant, or any grant proposal.

The Equipment…

A good part of the job of any research scientist is writing. This is why I’m surprised to see people still working as they did in grad school, on the venerable laptop. I rarely use a laptop: the screens are too small, they force poor ergonomics, they have iffy keyboards, they are near impossible to generate figures on, and they break too easily or get stolen.

I use a variable-height desk (motorized, from IKEA), a desktop computer with redundant backup, a battery UPS power supply, and a high-quality gaming keyboard with mechanical key action (they cost a lot). The actual brand and OS are irrelevant, for reasons that will be obvious below. The monitor is a 4K screen at 39″: it’s huge, with many pixels, because we do a lot of image analysis and cell biology. Another alternative is two or three 1080p screens.

The Timing…

Scientists tend to procrastinate. I think it’s inherent to the overworked lifestyle of the scientific mind, but it is the single worst habit in science, next to the removal of the bottom half of error bars in bar graphs (it’s wrong, it’s misleading, just stop doing it).

Step one is to set a timeline of grant writing activities, with a goal of completing the entire proposal one week before the institutional deadline. This means the proposal sits unread for a week before a final read prior to CIHR submission. Waiting for some awesome preliminary data? Bad practice, and this typically leads to poor quality preliminary data. Preliminary data does not mean poor data read with rose-colored glasses; it means publication-quality figures not yet published. Many proposals suffer from the idea that poor quality data is acceptable as ‘preliminary’.

It’s critical to leave the proposal for a week and re-read it. Can’t be done if you’re in the last hours to deadline.

The Software…

Until recently, I followed the classic paradigm of MS Word/EndNote/Reference Manager/some draw program. The problem with this software is that it has had a far too comfortable market share for too long, the competition is gone, and we are left with mediocrity that can often be unstable. How many times have we been stuck for 30 minutes trying to get EndNote to see a reference? Ever try to embed figures in MS Word? It’s stochastic, at best. Does Microsoft care? Nope. There are also anachronisms inherent to this software: poor third-party cross-talk and instability (sometimes the file is corrupted and just cannot be rescued), file sharing is cumbersome and poorly implemented, and you can lose hours or days of work easily.

I’ve settled on the package of MS PowerPoint, Google Docs, and the Google Docs add-on Paperpile. Last, a simple screen capture utility like the Windows Snipping Tool.

We’ve all had those nightmares… a power surge in your lab blows out your desktop, and on the way home you drop your laptop, two days before the final deadline. This can have many versions in the nightmare dreamscape, including a meteor hitting your office or an ominous black raven pecking out your laptop keyboard. Sure, it can all be fixed with time, but time has run out…

Google Docs is cloud-based in real time (MS now has this with Office), so the actual input device is irrelevant, and nothing is lost. Sure, as I write this, someone undoubtedly hacked the server and the world is in a tailspin, but the truly paranoid can backup to two cloud sources. The best parts of Google Docs are the integration of Paperpile and Document sharing.

Paperpile takes the Google Scholar engine and mates it seamlessly with Docs. For years, I would struggle with the AWFUL EndNote/RefManager search by bouncing back and forth between PubMed, Google, and the software, often having to build a citation from scratch. Tedious.

Once you install full Paperpile (just pay for it), wonderful things happen in your browser: a button appears beside any Google or PubMed search result.

[Screenshot: the Paperpile button beside search results]

Click and it’s in your library, and references are never missed (especially by PMID).

You can format references in any way (it should be Nature: less space), because the insanely stupid publishing industry cannot settle on a single reference format (my theory is they also secretly work for the Canada “common” (LOL) CV).

For figure mockups, I use PowerPoint, with tools for bitmap corrections (crop, brightness, contrast, etc.). All aspects of figures are dropped into one PPT file, mocked up, then captured as a bitmap using the Snipping Tool.

[Image: a sample figure mockup]

You can even adjust levels again within Docs. Full figures in minutes.

The figure bitmap is then pasted into Docs and set to “wrap text” with 0 margins. What you see is what you will get in the final PDF generated by Docs. Very reliable. Magazine-style, scalable figures.

The Writers…

Most PIs sit in their luxurious ivory tower offices and write The Great Canadian Proposal™ like some deranged hermit working on a manifesto linking mayonnaise, immigration, and global climate change.

Man… it sounds all so awesome. Totally clear.

I review a lot of proposals, thousands, between CIHR, NIH, and HSC, and some are as clear as mud, because it is one writer caught in their own feedback loop of awesomeness, often empowered by a “high impact” publication that somehow validates everything for another $1M.

USE YOUR LAB. It’s a critical training tool to teach your trainees how to write in the bizarre language of science. We blather on like idiots, Jumping From Acronym To Acronym (JFATA), or even better, Making Up Our Own Acronyms (MUOOA); JFATA and MUOOA enough and the proposal is FUBAR. The problem is that acronyms sometimes overlap across fields, and this can confuse a reader quickly.

Interestingly, we tend to write superfluously as if we are speaking aloud and trying to impress someone at a business pitch. This is wordiness. Interestingly, it leads to words like, interestingly. If you have to state one observation alone as “interesting”, your proposal is in big trouble.

The proposal at second draft should be shared with the lab. I mean the whole lab, from undergrads to PDFs. In Google Docs, you can see in real time who is simultaneously reading and commenting, with different colored cursors, and comments take one click to resolve and go away.

DUMB IT DOWN. It is very likely you do not have an expert reading your grant in Canada. We are a tiny country of mostly cancer researchers in biomedical science, and thanks to CIHR reforms, a non-scientist can still be reading your proposal and scoring it (it sounds stupid when you say it aloud). Thus, it should be understandable to any undergrad working in the lab. The worst thing you can do is get a colleague in the same research field reading drafts; this is still the Feedback Loop of Awesomeness (FLOA). My lab uses some biophysics maybe three guys in Canada have ever even heard of; this gets lost fast.

Figures: no more than 8; references: no more than 100. I once saw a record-breaking proposal with >40 figures and >400 references, with statements carrying >12 reference tags. I forget what it was about, but it should have been about obsessive compulsive disorder (OCD)12,23,34-56, 42, 187, 199-204, 206, 208, 210-14.

One easy killer comment: if they need so many data figures, why not just publish it? Thankfully, CIHR put an end to this with 10-page totals.

Lessons from the Triage pile….

If you are going to propose new methodology, make sure you know what you are doing. You are NOT going to CRISPR edit 45 genes and validate. Do not suggest FRET experiments unless you understand the caveats.

The Big killers:

The Amazing HEK293 Cell. Derived by Frank Graham at McMaster. There should be a moratorium on HEK and HeLa cells for anything other than over-expression of proteins for purification. They represent neither normal cells nor cancer cells, and definitely not neuronal cells, and they are not the route to translational studies in humans. They have shattered, hyper-variable, polyploid genomes with too many chromosomal anomalies to list, and are never the same, even within one lab. They are far from human. There are better alternatives for any disease; see ATCC or Coriell. However, Coriell is losing support because of so much scientific disinterest, no doubt because cell biology papers in major journals still publish studies of cell biology from one transformed and immortalized cell line and call it normal.

Pharmacology overdose. Take a “specific” drug with an established EC50, apply it at 100-10,000X. One wonders if these researchers, when they get a headache, take two aspirin or just quaff the whole bottle and hope for the best. These typical studies look at live versus very dead cells, and make specific conclusions, i.e.:


We measured NF-κB levels, and they were altered; therefore this model died from a defective NF-κB signaling pathway (also works for almost all clinical epidemiology studies).
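To see why the overdose is absurd: for a drug with simple one-site binding, fractional target occupancy is roughly C / (C + EC50). A quick sketch, using a hypothetical EC50 for illustration only:

    # Fractional target occupancy for simple one-site binding.
    # The EC50 value is a hypothetical placeholder.
    ec50 = 1.0  # micromolar

    for fold in (1, 10, 100, 1000, 10000):
        conc = fold * ec50
        occupancy = conc / (conc + ec50)
        print(f"{fold:>6}x EC50: target {100 * occupancy:.2f}% occupied")

At 1x EC50 the target is already half-engaged, and past ~10-100x essentially nothing more is gained on-target; the only thing still rising at 1,000-10,000x is off-target activity and toxicity.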

I’ll just look busy for 5 years…

Descriptive aims, also known as Yadda Yadda syndrome: the vague listing of stuff to do, because everyone else does this. This is Canada; do this and you will get scooped by postdoc #46-12b at some institutional Death Star in the US. More importantly, it is neither innovative nor interesting.

If I ask for less money, it will have a better chance…

This is a new consequence of CIHR reforms, where PIs typically funded at $40K are now shooting for the moon at $100K. A full CIHR Project with a single tech, a PDF, and 2 students is a $240,000+ proposal. Our dollar is no longer at par, which means expendables are now 25% more. My last CIHR operating grant period spent >$30,000 in publication fees. Bad budget requests can indicate the PI does not know the real costs of the project, nor will be able to complete it. Some pencil pusher will cut your budget, but no one will increase it for you.
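A back-of-the-envelope version of that budget, with placeholder salary and cost figures (assumptions for illustration, not CIHR or institutional rates):

    # All figures are assumed annual costs, for illustration only.
    costs = {
        "technician (salary + benefits)": 65_000,
        "postdoctoral fellow": 55_000,
        "graduate students (2)": 2 * 27_500,
        "expendables (incl. ~25% exchange premium)": 50_000,
        "publication fees (amortized)": 15_000,
    }

    for item, cost in costs.items():
        print(f"{item:<45} ${cost:>7,}")
    print(f"{'TOTAL':<45} ${sum(costs.values()):>7,}")

Even with conservative placeholder numbers, the total lands in the $240K range before equipment or travel.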

My Model is Best Model, because.

Model systems have utility for most diseases, but they also have caveats. Not all models work for all diseases, and you cannot take a single model that succeeded in one disease and blindly justify it across your entire focus. There are pathways entirely absent in many model systems relative to humans.

Some reviewers are of the opinion that the very poor success rate of genetic disease research (we’ve got lots of genes, no therapies) has to do with over-dependence on animal model systems. Mice are not humans. They are shorter.


Pulling huntingtin-associated DNA repair proteins out of HD patient cells

Blog post by Dr. Tamara Maiuri

Now that the cross-linking, fractionation, and oxidative stress conditions have been worked out to pull huntingtin and its interactors out of the “easy-to-work-with” HEK293 cells, it’s time to try out the system in HD patient fibroblasts.

These cells come from real HD patients and controls such as a sibling or spouse. The Truant lab is working on developing a panel of cell lines from different patients with different CAG lengths in their huntingtin genes. We have immortalized the cell lines with hTERT, which allows us to grow them indefinitely—a great resource to be shared with the HD research community.

One thing we and others have noticed is that the cells from HD patients grow much faster than the control cells. Both types of fibroblasts grow quite slowly and don’t yield much protein to work with. This means a fair bit of waiting around for cells to grow! Since the HD cells grow faster, there’s always more of them. For this reason, I used HD cells, bearing 43 CAG repeats, to test out the co-immunoprecipitation conditions previously worked out in HEK293 cells.

In this experiment deposited to Zenodo, I found that the conditions work quite well to pull huntingtin and its associated DNA repair proteins out of HD patient fibroblasts. One problem I ran into (other than the sloth-velocity growth rate) is that the mab2166 antibody doesn’t pull down huntingtin very well from 3NP-treated fibroblasts. This was confirmed in a second experiment, also added to Zenodo.

Luckily, mab2166 is not the only huntingtin-specific antibody available. I compared the ability of another antibody, called EPR5526, to pull down huntingtin and its associated DNA repair proteins. I found that even though EPR5526 pulled out slightly less huntingtin protein, there was more of the DNA repair protein APE1 associated with that pool of huntingtin. It could be that the EPR5526 antibody better recognizes huntingtin in the conformation it takes upon oxidative stress (previous work from our lab showed that huntingtin changes shape upon oxidation, which may also explain why mab2166 doesn’t recognize it as well from 3NP-treated cells). Whatever the reason, it looks like EPR5526 is the way to go for this application.

So the preliminary work is done and I now have the conditions right to identify the proteins associated with huntingtin upon oxidative stress by mass spectrometry. The slow growth of the fibroblasts is a major limiting factor, however. After speaking with the Sick Kids mass spec facility about how much protein is needed, I project it will take several months of growing up cells, treating them with 3NP, and freezing them down until I can collect enough material. Look for an update on this front in the fall!

In the meantime, I’ll be looking ahead at how we’re going to analyze the huntingtin interactors that we identify. The point of this project is to find proteins important to the DNA repair process in HD. So I need to find a meaningful way to measure differences in DNA repair between normal and HD cells, then test the effects of the interacting proteins on those differences. We’ve already shown there’s more DNA damage in HD fibroblasts compared to control cells, using a “comet assay”. But that system is labour intensive and not the best choice going forward. I’ve been exploring another way to test DNA repair in cells called a “GFP reactivation assay”. I’ll report my progress in forthcoming blog posts.

Lessons to Young Investigators

From the Foundation review process, from Stage one to Stage two, I learned some important lessons that younger PIs starting out their labs might want to consider. Be careful if you’re on a low-sodium diet, because this is both bitter and salty:


  1. Peer review membership. I sat on CIHR panels continuously for ten years, both operating and focused RFAs; I ran two annual HSC competitions for ten years; I sat on NIH study sections. This means up to six full panels a year. These are tremendous learning tools and should be taken very seriously, but do not expect any actual recognition for this work in any way that will affect your support. All of it had literally no impact with reviewers. I advise doing this, but no more than once a year, in years you are funded. Keep in mind that doing a proper peer review job at a panel, with face-to-face discussion, is over 80 hours of effort (really); this time is much better spent applying for funding or publishing. This is why some reviewers who agreed to review in Project and Foundation did not bother to return any reviews at all, and why instead of the minimal seven opinions, we are lucky if we see three.
  2. Publications. Publish everything, everywhere; quantity certainly is more impactful than quality. For the review process, it’s easier to count than to read. One senior CIHR Foundation investigator listed over 840 manuscripts in a 30-year career. That’s one paper every two weeks, without a break, for 30 years. These are the games we play in science. Of course, this is the exact opposite of what Nobel Laureates will suggest, as they point out zero correlation of publication numbers or impact factors with actual practical impact on scientific progress. These metrics are gamed (notice a lot of free Nature journals in your inbox? That affects Impact Factor).
  3. Knowledge Translation and medical translation. This is an important concept: it is pointless to write template letters to MPs asking for increased support when the taxpayer in Canada does not know anything about what is going on with their money, and neither does a typical MP. Here I quote Princeton Professor Harry Frankfurt, who wrote the NY Times best-selling treatise On Bullshit, a book given to me by Professor Emeritus Allan Tobin of the UCLA Brain Research Institute:

One of the most salient features of our culture is that there is so much bullshit. Everyone knows this. Each of us contributes his share. But we tend to take the situation for granted. Most people are rather confident of their ability to recognize bullshit and to avoid being taken in by it. So the phenomenon has not aroused much deliberate concern. We have no clear understanding of what bullshit is, why there is so much of it, or what functions it serves. And we lack a conscientiously developed appreciation of what it means to us. In other words, as Harry Frankfurt writes, “we have no theory.”

Bullshitting is not lying.

Rather, bullshitters seek to convey a certain impression of themselves without being concerned about whether anything at all is true. They quietly change the rules governing their end of the conversation so that claims about truth and falsity are irrelevant. Frankfurt concludes that although bullshit can take many innocent forms, excessive indulgence in it can eventually undermine the practitioner’s capacity to tell the truth in a way that lying does not. Liars at least acknowledge that it matters what is true. By virtue of this, Frankfurt writes, bullshit is a greater enemy of the truth than lies are.

The result of this in science, with KT and medicine, is that there is so much of it on proposals that any real KT or medical translation effort will simply be regarded as more bullshit. This is not fraud or lying; it’s crafting the language institutions seek, with buzzwords and new-method terms like: inter-disciplinary, knowledge translation, innovation.

For the record, I have given ~30 lay lectures to HD families at national conferences or local HD chapters, and this is a fantastic experience, as is being the External Scientific Editor of HDBuzz.net. I will continue to do this for as long as anyone asks. My trainees all attend HD family days and have written articles for HDBuzz since 2011, in what is true knowledge translation, but what was referred to as “no evidence of knowledge translational activities” by my peers.

  4. Applying for Grants. The most successful CDN scientists apply to every grant competition, every time, whether they need the money or not, because at most CIHR panels a few high impact manuscripts could snowball into 2 or 3 CIHR grants, whether there is 2-3 times the productivity or not. Unethical? Certainly, but these investigators then get rewarded with massive Foundation awards and merit increases. Will anyone seriously check to see if you actually completed the aims from the last support period? Nope. Sure, a report is filed years after the project has concluded, but I have yet to see any evidence it is actually read.

 

The CIHR Foundation scheme

The September 2016 CIHR Foundation grant competition released its final reviews 11 months later, on August 4th, 2017. This was just days before the 2018 registration deadline, although Stage 1 and Stage 2 applicants had been informed of their progress months earlier.

There were 600 applications, of which 234 were accepted to Stage 2, but only 229 actually submitted a full Stage 2 application. I was considering not applying further, despite Stage 2 acceptance. These were very limited-space grants. The Foundation scheme was designed to support established investigators with seven years of support, collating multiple grants into one, with no opportunity to apply for more funding as a PI over those seven years. The idea was to reduce the application load at the CIHR, so that the most productive mid-career scientists were not bogged down with continually writing grants. But something changed along this path…

At some point in the planning, Early Career Investigators had a significant portion of the funds reserved for them, with 5-year grants, the same time frame as Project grants. This was never the intent of Foundation, and this cohort was dropped for 2018. The CIHR, at that time led by Dr. Alain Beaudet, wanted to stress aspects of the research community not addressed by the old CIHR operating grants: community engagement, knowledge translation (KT), participation in peer review, and the medical impact of basic science research. Part of this focus was the result of consultations with the NIH in the US. However, when the review scheme of anonymous online participation from multiple reviewers was presented, the NIH warned the CIHR that this was a bad idea and would promote superficial, trivial reviews and cronyism. This was also the feedback from senior CIHR-funded investigators during the consultation phase. It was ignored.

The CIHR allowed age to be considered for Early and Mid Career investigators, declaring anyone with over 16 years as a PI a Senior investigator. So, while there are age brackets for the younger investigators, there would be no age limits for the senior investigators. In the first pilot scheme competition, seven men over 70 years of age were funded in multi-million dollar grants, essentially the equivalent of 28-30 Project grants. Their 30- and 40-year CVs certainly outshone the CVs of 16-year PIs.

Under new CIHR management, the Foundation scheme has been reduced considerably, but it still represents a significant portion of the total CIHR budget at $125M, as $75M was moved to Projects to address the poor funding rates there. The plan in 2018 is to support 40 investigators with $125M, over $3M each; in other words, about 140 old Operating grants now concentrated in just 40 PIs. The ECI cohort was eliminated.

So Foundation is now focused entirely on very senior citizens, most of whom benefited heavily at mid and early career from mandatory retirement at 65, which was eliminated in Canada in the mid-2000s. As a reviewer in the Foundation scheme pilot phase, I saw some very impressive publication and trainee records, but certainly a tailing off of productivity above age 60; that assessment, however, was no longer relevant. Knowledge translation now meant self-promotion press releases or CBC interviews about the next “5 years to a cure” story, which academics excel at. CIHR or NIH panel membership made no significant difference to scores. Thus, we now have a significant number of senior investigators rescued from retirement and parked at the highest salary brackets in academic institutions, with little or no incentive to actually produce anything, as they are unlikely to ever apply for funding again. Meanwhile, the most productive 40-60 year old PIs are being hollowed out, with no Foundation, and with Project grants now in the hands of social-media-style peer review.

The net result in Foundation is that we may see the lowest ever productivity-per-dollar in CDN research history, but it will take 7-10 years to actually see the data, and that’s some other government’s problem.

As part of the Truant lab’s Open Science initiative, I will post my Foundation reviews online. I think people can easily understand the decision to return to face-to-face panels when they see a textbook example of superficial peer review, and the failure of the Project and Foundation schemes that led to cancellation and partial cancellation. There are no SO notes on any live discussion. Despite the Os and O+s, the proposal ranked very poorly. In fact, it received the lowest ranking of any proposal I have ever written for any agency in 18 years.

Reviewer one:

Quality of the Program/Qualité du programme
Criterion/Critère: Research Concept/Idée de recherche
Rating/Cote: O+
Strengths/Forces: The applicant aims to reveal novel insights how the polyglutamine expansion in Huntingtin protein leads to Huntington’s Disease (HD) and to develop analog compounds on a preclinical pipeline of his own lead hit, N6FFA/kinetin.
The applicant made a breakthrough discovery that huntingtin phosphorylation at Ser13 and Ser16 can be modulated by small-molecule drugs, which may have therapeutic potential in Huntington’s disease (Atwal et al., 2011, Nat. Chem. Biol.).
This proposal is based on this discovery. I am impressed by the originality of this proposal.
Weaknesses/Faiblesses: None
Criterion/Critère: Research Approach/Approche de recherche
Rating/Cote: O
Strengths/Forces: The applicant proposes experiments that can lead to the discovery of new drugs, which seems exciting. The access to patients’ samples is plus.
Weaknesses/Faiblesses: The lack of in vivo model makes it difficult to evaluate the effects of potential drugs.

(The plan clearly states the use of two mouse models with two support letters, both of which are good models of HD)
The description of genetic modification of human cells including CRISPR/Cas9 mediated knockdown is too brief and the feasibility is not well addressed.

(The proposal does not suggest CRISPR knockouts or “knockdowns”, we defined experiments to create isogenic HD human primary cell lines, the reviewer clearly does not understand CRISPR/Cas9, how it works, the results, or likely what isogenic even means.)

Quality of the Expertise, Experience, and Resources/Qualité de l’expertise, de l’expérience et des ressources
Criterion/Critère: Expertise/Expertise
Rating/Cote: O
Strengths/Forces: The applicant has strength in chemical biology.
Weaknesses/Faiblesses: The applicant stated the conversion of human cell lines into neurons, which is not easy and he did not show feasibility on this.

(the methods were fully referenced)
Criterion/Critère: Mentorship and Training/Mentorat et formation
Rating/Cote: E++
Strengths/Forces: The applicant has trained one pdf, one clinician scientist and 19 graduate students. These are good numbers.
Weaknesses/Faiblesses: The applicant does not provide or mention a tracking record of his trainees.

There is literally no requirement for this at CIHR, but it was listed in the CV module anyway; somehow, the other reviewer managed to pick over details which apparently do not exist.

Reviewer two:

This was the most detailed review. This is a proper review.

Quality of the Program/Qualité du programme
Criterion/Critère: Research Concept/Idée de recherche
Rating/Cote: E++
Strengths/Forces: The applicant plans to use human-derived fibroblasts from Huntington disease patients and controls to screen for drugs that correct deficient phosphorylation on ser13 and ser16 in the N-terminal region (N17) of the mutant (polyglutamine expanded) huntingtin protein. He will also screen for changes in DNA repair pathways associated with earlier than expected or later than expected age of onset, based on the CAG repeat length in the HD gene. In the last 2 sentences of this section, two additional goals are mentioned – determining whether mutant huntingtin itself or ROS load can trigger somatic expansion of the CAG repeat in the HD gene.
Use of human cells is a major strength.
The focus on DNA repair and ROS is a strength, given the recent results of a large GWA study from Gusella and McDonald (Cell 2015), indicating that genes associated with these functions are modulators of HD age of onset.
The techniques the applicant has developed for screening include automated image processing to detect localization of huntingtin to the nucleus, which is also a major strength towards an unbiased approach.
Weaknesses/Faiblesses: The goals and objectives of the program are not well-defined or well-articulated. There are a couple of sentences buried in the narrative that mention goals, but nothing more, and no objectives are listed. Introducing somatic CAG expansion in the last 2 sentences does not build conceptual coherence.
It is difficult to follow the rationale or conceptual drivers of this program because it is so highly technical with no basic explanations of what is being measured. It is not clear what Fig. 2 is showing – what principle components are being compared? Is this based on imaging data, and what about the imaging data is being used to derive the PCA? N6FFA is not defined other than as a “product of DNA base excision” – as such, it is unclear why “oral loading” would normalize huntingtin phosphorylation?
In Fig. 3 – what is “N18”? Is this blot showing phosphorylation state of huntingtin?

This is literally the focus of eight years of manuscripts. No one reads the references.
Criterion/Critère: Research Approach/Approche de recherche
Rating/Cote: E
Strengths/Forces: The impression from reading this section (although it is very possible it has been misunderstood) is that the applicant will use an imaging system that is automated to allow acquisition from thousands of cells, which is a strength; the patterns will be determined using supervised machine learning, another strength, to avoid subjectivity in the analysis of images and facilitate high throughput. Clusters of patterns will be found based on this analysis, and then subjected to unsupervised machine learning and principle component analysis.

The reviewer clearly does not understand blind PCA, or the technology, and is looking for experimental details in a seven-year plan condensed into a few pages. The instructions were to provide a programmatic approach; the reviewer is expecting Operating Grant detail. Impossible in this minimal format.

Weaknesses/Faiblesses: Again, the lack of organization or clear explanation makes it very difficult to understand what is to be done. How will the applicant define late onset or slow progressers among the HD patient cohort?

(This is outlined)
The applicant aims to create at least 50 lines (from patients or controls), but this is a small number to determine factors associated with later onset or slow progression (those patients will be a very small minority of the total), in particular since each cell line will have differences based on genetic/epigenetic background and varying CAG length.

The reviewer somehow thinks we are doing GWAS on cell lines? The reviewer does not understand outlier studies, or the point of outlier definition.
What is a “photosensitivity side effect”? This is suggested as a key feature of drugs that may normalize phosphorylation of mutant huntingtin, but it is not defined nor is it explained how this might be related to efficacy.

A reviewer for a biomedical research agency in charge of close to $1B in spending does not know what drug photosensitivity is. Let’s hope this reviewer is not a clinician.
Will CHDI be an industry partner on this program? If so, that should be clearly indicated at the front of the proposal.

CHDI is a foundation, not industry. This was fully explained, and eight years of partnership are listed in the CV.
How will the Enroll-HD data set be used in this program?

This answers the earlier queries.

There is no mention of the lab team who will be carrying out experiments described in the program, so difficult to judge feasibility.

This is all delineated in the budget module, where it is supposed to be, yet obviously unread.

In most significant contributions section, he claims “In 2012 I described the robust effects of GM1 ganglioside on a HD mouse..” – in fact, this work was led and primarily accomplished in the lab of the senior, corresponding author, Prof. Sipione, at U Alberta. Although the applicant is a co-author, this statement is misleading as to his role.

We actually provided significant expertise and a critical reagent representing months of work, as indicated by the authorships, and a second manuscript that same year in which I was CI and Dr. Sipione an important contributor. The point, entirely missed, is that my lab is the only lab in the world, to date, to show small molecule efficacy in HD models by two different pathways, with direct target engagement readout.

Weaknesses/Faiblesses: Although applicant mentions that he expects 3 quality publications from each PhD student, one of his students (redacted here -inappropriate to list this name here, or in the review) graduated without authoring a paper (based on pubs listed in CCV and also Pubmed search).
The applicant has one ongoing PDF who is finding it challenging to obtain a job.

I inherited the student mentioned as the result of a failure to tenure a local PI; the student would otherwise have been tossed out with nothing after three years. The thesis did encompass three publishable chapters, but the student left without any effort to actually submit a manuscript. Nice of the reviewer to ignore that the remaining seven PhDs actually averaged more than 5 publications each.

As for a PDF finding it challenging to find a job: this reviewer has obviously not been active in science for the last decade, and this PDF just won three-year full scholarship support in an international competition, based on data disregarded by CIHR. This level of nitpicking is outstanding. When you cannot assess the science, focus on this, but ignore peer review or true KT efforts.

Weaknesses/Faiblesses: There is no mention of support at the University level, as to whether the applicant’s field of research and/or the Biochem/Chem Biology program is a priority.

I have to agree with this: McMaster has never nominated me for any awards in 18 years, despite continual, uninterrupted funding in the millions.

As for the priority – I guess this reviewer does not know what a Full Professorship at a University means. It means we do it all. Their PDFs come from somewhere.

Reviewers 3 and 4:

I’ll summarize, as both listed “none” under weaknesses, yet scored the proposal in the lowest quartile possible. In total, 15 minutes of effort is obvious. But one reviewer, obviously not a scientist, was a standout:

1) The applicant should focus on describing what he discovered in his many publications and how the discoveries impact the field rather than hyping the fact that he has been the first to do something, which otherwise seems boastful.

Oh dear, being boastful on a competition being reviewed to distinguish and rank applicants, we can’t have that. 

2) The applicant highlights the fact that he is fighting the prevailing dogma in the field but he never breaks down how that dogma is misleading our scientific progress.
Is everything that everyone else discovered wrong in HD research? (hard to believe)

I guess the reviewer missed the lines that there is literally nothing therapeutic in HD research in 25 years due to a dogmatic approach of protein homeostatic mechanisms being the trigger. And yes, GWAS, and failure to treat this disease in 25 years, has told us that certainly, most HD research is wrong, the majority irreproducible, but that won’t stop older investigators from getting millions more in support.

Instead the applicant should describe in greater detail how his specific discoveries remedy the situation.

Literally the entire research plan section. Incredible.

3) The applicant should break down the proposal into discrete aims. The proposal currently reads like a continuous stream of consciousness.
Even after reading the application multiple times, i have a hard time summarizing what exactly he intends to do.
The story telling aspect of this application is in the bottom half of my assigned grants.

The proposal literally has three delineated aims and sub-aims. I think the reviewer just read the summary and skipped significance.

Weaknesses/Faiblesses: Grantsmanship is a problem.
The first half page of the research approach has no leadoff.

Because the previous section is supposed to highlight significance? This small section was “research plan”, in which, surprisingly, I outlined the research plan.

 
The reviewer is forced to piece things together by themselves.

i.e., read all sections.
There are obvious disconnected factiods listed that a reader has no hint of how they fit into the larger narrative.
For example Our own work, in (Fig. 6) shows that huntingtin N17 and P53 contain a similar CK2 site, which is unique to just these two proteins.
What does this have to do with anythings(sic)?

Literally explained in the significance section, which was clearly unread, and it is the central hypothesis of the proposal.

 

Testing oxidative stress conditions for huntingtin-DNA repair protein interactions

Blog post by Dr. Tamara Maiuri

In the first blog post reporting on the Oxidative Stress Interactome Project, funded by the HDSA Berman/Topper HD Career Development Fellowship, I promised to pull huntingtin interactors out of patient-derived skin cells. And I will! These are a much better model than the “easier-to-work-with” HEK293 cell model. But a few more optimization steps are necessary. In the process of working out the fractionation and crosslinking conditions, I ran into some irreproducibility in the amount of DNA repair proteins coming down with huntingtin under oxidizing conditions. There are a few DNA repair proteins, such as APE1 and XRCC1, that we already know interact with huntingtin upon oxidative DNA damage. I use these as a gauge for whether the experimental conditions are right. Once everything is in place, we’ll get to the purpose of this project: to identify more of these oxidation-dependent huntingtin interactors in the hopes of targeting them for HD therapy.

These DNA repair proteins interact with huntingtin under conditions of oxidative stress. So maybe the irreproducibility problem comes down to the source of oxidative stress: it’s not consistent enough from experiment to experiment. I’ve been using potassium bromate, but it loses its potency rather quickly, has to be prepared fresh each time, and the stock has to be replaced often—not the best situation for consistency and reproducibility.

In this oxidative stress optimization experiment (posted on Zenodo), I tried hydrogen peroxide instead of potassium bromate. It’s very difficult to get hydrogen peroxide shipped to the lab, and we’ve been waiting for it to clear customs since January! So I used the old stock. Not too surprisingly, it wasn’t very good at inducing DNA damage-related huntingtin protein interactions. But I decided to post about this experiment because I learned something else: the fractionation method I’ve been using releases proteins from the nucleus, where DNA is stored, in the first step. And the huntingtin interactors are enriched in this fraction. This simplifies the fractionation process even further, because I can omit the second step from now on.

Next on the list of oxidative stress agents is 3-nitropropionic acid (or 3NP). This is a good one to try because it mimics what is happening in HD neurons by damaging mitochondria. In fact, mice treated with 3NP end up with degeneration in the same brain regions as HD patients, and show similar symptoms. 3NP worked quite nicely to induce huntingtin interaction with our “positive control” DNA repair proteins, APE1 and XRCC1 (check out the results on Zenodo).

Skin cells from patients are one of the best cell-based models we have to study HD. But they grow very slowly and don’t yield much protein to work with. So we have to wait for them to grow. In the coming weeks I will have enough cells to test the experimental system I’ve worked out in HEK293 cells. Let’s hope for a smooth transition!

 

 

Dogma and blinders

Huntingtin is expressed in every cell in the human body. Yet shutting down protein expression has been regarded as the singular therapeutic goal in HD; this has become as dogmatic in 2017 as protein aggregation was from 1996 onward.

This work, published today, suggests that long-term removal of huntingtin may not be a great idea. We have to insist that any huntingtin-lowering therapy be reversible, dosable, and ideally allele-specific.

This work, published two days ago, indicates that C9orf72-mediated ALS pathology acts through inhibition of ATM kinase, the same complex we defined as inhibited in HD.

It’s time to break down the silos, take off the blinders, and promote cross-talk across neurodegeneration. What was heartening about the Alzforum article is that an Alzheimer’s disease writer was interviewing an HD researcher about an ALS manuscript. Hopefully we’ll see more of this.


Real Time Report: The Oxidative Stress Interactome Project

Blog post by Dr. Tamara Maiuri

We’re going live! Welcome to the first of a series of blog posts aiming to report our findings in real time. This post reports the first steps in the Oxidative Stress Interactome Project, funded by the HDSA Berman/Topper HD Career Development Fellowship.

The Oxidative Stress Interactome Project

Oxidative stress is something that happens in our brains as we age, and the inability to deal with it properly has been linked to neurodegenerative diseases including HD as well as Alzheimer’s and Parkinson’s. We know that the huntingtin protein interacts with many other proteins in response to oxidative stress. These proteins come together to repair DNA damaged by oxidation. What if some of those protein-protein interactions have gone wrong when the huntingtin protein is expanded, as it is in HD? That’s what we aim to find out. Then we’ll look for drugs that might fix the problem.

Setting The Scene (aka optimizing experimental set-up)

I will be purifying the huntingtin protein, and any other proteins interacting with it, from skin cells that came from real HD patients. The skin cells are grown in a plastic dish. First, I’ll treat the cells with oxidants such as hydrogen peroxide or potassium bromate, to mimic the oxidation that’s happening in the aging brain. Then I’ll compare the list of “huntingtin interactors” in the presence or absence of oxidation, to see which ones are recruited to the job under conditions of oxidative stress. Down the road, I will compare the normal and expanded huntingtin protein. But one step at a time…

The interacting proteins will be identified by a technique called mass spectrometry. I’ll explain mass spec in a future blog post, because before we get to that step, several things need to be running smoothly. For example, there’s no sense sending your samples for mass spec analysis if not enough huntingtin (and its interactors) was purified from cells in the first place!

To set the scene, I tested two aspects of the experiment:

  • How to break up the cells and get at the protein (fractionation)
  • How to keep the proteins from coming apart during the purification process (crosslinking)

In a fractionation optimization experiment that you can read about on Zenodo, I tested sonication, which basically uses sound energy to break up the cell nucleus and release DNA-bound proteins. It turns out that although sonication released more proteins from the cells, it may have disrupted protein-protein interactions or interfered with the purification of huntingtin, since less huntingtin (and fewer of its interaction partners) was recovered in the end. However, I could see that many more proteins interact with huntingtin upon treatment of the cells with oxidants. Those are the ones we want to identify!

After further fractionation optimization, and comparing notes with other lab members who are purifying huntingtin from cells, I settled on treatment with DNase (a protein that chews up DNA) to release the DNA-bound proteins from the cell.

In a crosslinking optimization experiment (also on Zenodo), I tested 2 different ways to link proteins together. One of the crosslinking chemicals (Lomant’s reagent) was quite finicky and came out of solution. The other (paraformaldehyde, or PFA) actually seemed to improve the cell fractionation process. Both chemicals helped keep the interacting proteins together. Since PFA gave better results, this is the crosslinking agent I’ll move forward with.

The scene is coming together, but there are a few more things to do to make sure we get the best/most informative samples of huntingtin and its interactors. These optimization steps have been done in different types of human cells, because they grow quickly and yield good amounts of protein. Skin cells from patients are not as easy to work with, but it’s time to test the system in those “fibroblasts”. That’s what I’ll be doing next. Stay tuned!

Support the Report

In April 2017 the Canadian Government released the Fundamental Science Review report from a committee headed by David Naylor. This is the first time in decades any government in Canada has asked for an expert review of how we fund science, as these policies are historically political, and often contradict expert opinion on how best to spend what few tax dollars Canada has ever dedicated to basic science research.

Our review highlighted failure. As a country, we spend less than 0.5% of our health care spending on the CIHR annual budget, on research that ironically lowers health care costs in Canada. Per capita, this is 1/4 of what the US spends (1/40th in total dollars), and less than the rest of the G8. Yet despite this, for a tiny country that could fit into most US states, we have a bizarre number of agencies and programs duplicating administrative overhead costs, diverting funds from labs.

Most research grant dollars support people: well-trained people and young people who want to change the world and make life better for the sick. Somehow, successive Canadian governments have decided that the military is 15X more important for the future of Canadians than health research. While the United States enjoys a robust biotech and pharmaceutical industry, owning 45% of the world’s pharmaceutical market, Canada’s pharma impact comes mostly from outstanding expatriate Canadian scientists doing research and development in the US.

If we ever see the F-35 jets, keeping them parked in hangars will cost more per year than the entire CIHR budget.

We encourage the Prime Minister and Minister of Youth to reverse the historic apathy toward Canadian science and adopt the recommendations of the Naylor Report.

1% can make all the difference. #SupportTheReport

 

Master of Science

Congratulations to Susie Son on the successful defense of her Master’s thesis. She can now drink from the McMaster Chalice of Knowledge™.

[Photo: Susie’s thesis defense]

She’s off in September to Dental School at the University of Toronto.

Susie’s work has progressed to a manuscript in preparation on the interaction of the huntingtin protein with HMGB1, a critical factor in DNA repair and autophagy control.

[Photo: Susie’s defense celebration]

Tam’s not here, she’s in Chicago at HDSA.