
COMP2511: Object-Oriented Design & Programming

University
University of New South Wales

Subject Reviews

fun_jirachi

2 years ago

Assessment
Assignment - 15%
Project - 35% (3 milestones over 4 weeks)
-- milestone 1 + 2 given two weeks, worth 17.5%
-- milestone 3 also given two weeks, worth 17.5%
Class Mark (Tutorials + Labs) - 10%
Final Exam - 40%
Assumed Knowledge
Prerequisite: COMP1531 AND (COMP2521 OR COMP1927)
Comments
The course was pretty okay for the first half - new raccoon (Refactoring Guru > Tom Nook), a relatively tame assignment (verbose and frustrating to work through, but easy to shrug off; I didn't like how long it was for the purpose it served, an intro to object-oriented programming - easy, but too long), occasionally unnecessarily long lab tasks, etc. I could get past all that, but then the course went to shit when the project happened. I'll also add that the labs that ran during the project were really long, so whatever they were meant to teach was lost on students amid how bad the project ended up being. Retrospectively, those labs only really served their purpose as study material for the exam, and at the time they often felt out of context with the lectures.

For context, we were told that the automarking process (which wasn't a thing in the previous offering of the course) was needed to ensure greater breadth in testing the correctness of students' projects, which in turn awards fairer marks, particularly to those who completed more work. The only problem with these intentions (which I fully support, as they make logical sense) was that the execution was mindbogglingly poor, and it achieved neither of the objectives I've listed (correctness + fair marks), as I'll address below. There are also other factors that I think are potentially partially responsible for the poor execution, but I won't go into those in depth because they aren't as pertinent to the course itself. Touching on them briefly: it often felt like there could have been more hands-on support from course administration, especially when the course was visibly going awry, but there wasn't, for whatever reason (extra work, other commitments, etc.). Nitpicking slightly, the announcements were sometimes inconsistent (i.e. "we won't give you X input" / "we won't test you on Y case", and then those events actually happened, stuff like that).

But anyway, the main spiel:
From the start, the timeline should've rung alarm bells. Two weeks per milestone is not bad, though more time is preferable. But when the project is split like it is, and the second "half" depends hugely on the first (the whole point of the second bit is how well your design from the first adapts to new criteria; to quote the project specification: "60% of your [Milestone 3] automark will come from testing a completed interface which includes all the requirements in Milestone 2, and incorporation of the following new requirements (1.1 to 1.4)."), it's imperative that students get feedback really quickly. There are two weeks between the two due dates, and as such two lab sessions. However, due to the structure of the course, we demonstrate our product to our tutors in the lab session immediately following the first due date and receive feedback in the next. Depending on when your session is (or whether your tutor gives feedback outside lab time), the time remaining to act on that feedback for the final product can be anywhere from 4-7 days.

That's particularly nitpicky, but it certainly isn't the worst part, because that title is reserved for the various shenanigans automarking created. I have no words to describe automarking other than genuine shit, because a) as stated before, the execution was awful, b) the process to remedy it was equally if not more awful, and c) the automarks were released really damn late, i.e. 5 days out from the milestone 3 deadline. (They genuinely could have been released earlier, unless the autotesting suite itself wasn't ready before the automarks went out, which would point to admin unpreparedness.) This course already carries an implicitly high workload, but the late results made us scramble even harder (unnecessarily so, IMHO, since it was in no way our fault), especially since few of the errors the autotests raised were helpful in pointing out actual flaws in groups' programs. It was genuinely enraging at the time, and even in hindsight, remaining somewhat level-headed, it's impossible to describe it as anything other than a complete shocker.

The flow-on effect of this late release, and of the failure to deliver on the initial rationale for automarking, was that through no fault of their own, students had close to no time to fix these non-errors in milestone 2 because of the looming milestone 3 due date. It became a dilemma: work on milestone 3, which relied on the "buggy" milestone 2, or maximise the previous marks and sacrifice milestone 3. (For context, you were likely to fail autotests in milestone 3 similar to those you failed in milestone 2.) In the end, many groups had no choice but to go with the latter option because of that hanging threat.

Now, addressing the remarking process (i.e. "b) the process to remedy this was equally if not more awful"): the initial remark was slated to be returned on the Saturday before the Monday due date, IIRC, which to a student is absolutely outrageous - the organisational disarray would have been ridiculous. We had no dry runs prior to the submission for Milestones 1 + 2, i.e. nothing, not even the most basic checks to ensure we wouldn't fail on a technicality rather than incorrectness. A dry run would have prevented a lot of the problems that arose. The official(?) reason for not providing one was that it'd give away the testing suite, which seemed weird then and remains so. A LOT of groups failed on dumb technicalities, and even a remark wouldn't have solved this, because there were so many technicalities that a single remark might fix one only for your group to uncover another. Despite this being in no way the students' fault, it was made out as if it was. We weren't allowed to "debug" - but many groups just wanted to fix the technical errors (the ones the autotests wouldn't tolerate, which weren't even wrong in the first place) as opposed to logic errors. In the end, dry runs were released for milestone 3 (anything short of the actual testing suite would have been fine for milestone 2), but these came two days after the automarks were actually released and were lacklustre at best - just the most basic milestone 2 tests, reused.

Other issues related to remarking include but aren't limited to:
- A marking cap was used to allow for small incremental errors/differences between the tests and groups' work. This initiative failed for multiple reasons: as stated elsewhere, because of how the autotests ended up running, the cap came off as compensation for a poor specification rather than an allowance for assumption variation; and the cap of 80-90 wasn't particularly helpful at first, since a lot of groups initially scored way lower than that. (I will concede something below.)
- There was a remarking penalty for "non-atomic changes", which were often necessary for some groups because the set of changes classed as atomic was (somewhat) objectively narrow. This penalty was kept in place even after the shitshow this turned out to be, which I personally thought was rather ridiculous (it wasn't even reduced, though I'd like to think it was adjusted slightly behind the scenes, despite the max 20% penalty still being a thing).

I will concede that this whole process would have been acceptable had the autotests worked as intended (with a provided dry run, of course), but as they didn't, it just made everything a whole lot worse. Another concession: you did keep the highest mark across all the remarks, but I think that pales in comparison to how bad automarking ended up being.

The last point (i.e. "a) as stated before the execution was awful"): the biggest problem here was that a lot of the project was open to interpretation, which a lot of the autotests did not factor in. While there was good breadth in testing, the tests went into too much depth, thus by definition making assumptions, which in many cases conflicted with the more-than-valid assumptions made by students. We were told we should make assumptions, and were encouraged to do so where necessary, then essentially got screwed for doing exactly that: things the specification didn't clear up and that were covered by fair assumptions (i.e. not even needing a forum question) were causing autotests to fail, and we didn't know what these "errors" were. We were also told the autotests would test "lower level / general stuff" and NO edge cases, but this was in general not true (some tests fell squarely under the umbrella of "edge case"; others tested higher-level behaviour where, by definition, students' interpretation comes into play).

A phrase another student used that encapsulates this whole saga rather well is "you're allowed to make assumptions, as long as they're also the ones we make", which is frankly ridiculous. If passing the autotests required assumption X, that should have been explicitly stated in every case, not just a select few (for which I'll give *some* credit) and vaguely elsewhere. I also saw a student say something along the lines of "the project uses design by contract but essentially expects us to defensively program". It's just a shame because, overall, autotesting is worth 14% of your OVERALL grade; for some rather extreme context, getting 0 for automarking in total can drop you from 100 almost down to a Distinction. It's even more of a shocker when the autotests didn't do their job properly, and more so again when you realise autotesting was worth more than design in what is fundamentally a software design course (1.33x more, if I recall correctly).
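To unpack that last quote, here's a minimal hypothetical sketch (my own illustration, not code from the actual project) of the difference: under design by contract, a method documents its precondition and trusts the caller; under defensive programming, it validates everything itself. The autotests effectively demanded the defensive style while the spec read like a contract.

    public class Player {
        private int x;

        /**
         * Design by contract: the precondition is documented and the caller
         * is trusted; behaviour outside the contract is deliberately
         * unspecified. Precondition: dx is -1, 0 or 1.
         */
        public void move(int dx) {
            x += dx;
        }

        /**
         * Defensive programming: the method validates its own input, pinning
         * down behaviour that a contract would leave open - which is what
         * autotests of this kind implicitly assume.
         */
        public void moveDefensively(int dx) {
            if (dx < -1 || dx > 1) {
                throw new IllegalArgumentException("dx must be -1, 0 or 1");
            }
            x += dx;
        }
    }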

An example of a really bad test that was actually given:
For context, we made a dungeon crawler game. A particular enemy can spawn, and has a chance of spawning with armour; that chance is arbitrarily decided by your group. However, there was a test in the automarking suite that you failed if NONE of the first ten of that enemy spawned with armour, i.e. if you assumed this enemy had a 10% chance of spawning with armour, you'd fail the test roughly 1/3 of the time. The test was purely luck-based, and statistically favours groups that arbitrarily chose a higher armour spawn chance. Now, this particular test wasn't worth a lot (given the number of tests in the suite), but when this sort of thing crops up multiple times across the testing suite, you can imagine the fury of the students. How this particular test was ever a good idea, I'll never know.
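To put a number on the luck involved (a quick sketch, using the hypothetical 10% figure above): each spawn independently misses armour with probability 0.9, so all of the first ten miss with probability 0.9^10 ≈ 0.35 - roughly the 1/3 failure rate mentioned.

    public class ArmourTestOdds {
        public static void main(String[] args) {
            // Assume each spawn independently has a 10% chance of armour.
            // Then P(no armour among the first 10 spawns) = 0.9^10 ≈ 0.3487,
            // i.e. a perfectly correct implementation fails ~1/3 of the time.
            double pFail = Math.pow(0.9, 10);
            System.out.printf("P(failing the armour test) = %.4f%n", pFail);
        }
    }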

Other pertinent points:
- The response to criticism was passive and slow. Some feedback ran along the lines of "go read the spec", "don't worry about it", etc. There was also a ~15 minute window where public posting/commenting on the course forum was temporarily disabled, which seemed really strange given the timing (at the peak of the complaints and student anger). And however long it took to get marks, it felt like it took even longer to get responses to criticism of the automarking process. In short: a lack of transparency, stability and communication.
- I personally found it weird that no deadline extension was ever on the table (even though many students had privately conceded that an extension wouldn't fix things). The only one afforded to us was a 5hr extension for a 5hr GitLab outage during the first submission - I can guarantee that outage slowed students down by a lot more than 5 hours, even if a longer extension would only have prolonged the pain.
- Groups with bigger issues that couldn't be resolved by a remark received manual marking, but this was infeasible at scale. It felt really selective, and I can imagine that a) some groups may not have been bothered anymore, and b) many had bigger issues still. Given the problems this course has had in previous offerings, it would have been better to execute things properly the first time. Succeeding after manual marking just felt bittersweet; it felt really damn wrong to have to blunder through all this bureaucratic BS just to be assessed correctly.
- If code coverage was high enough, it's worth wondering whether using each group's own testing suite might actually have been fine, but that's a point for another time.

It's a shame because this course genuinely has potential; OOP as a concept is pretty interesting, but like many other courses (especially certain ones I've taken previously), off-the-mark administration ruins the student experience. I was only taking two courses and was still fully occupied - a disproportionate workload. It's hard to believe I was considering taking another course at the start of term, and I couldn't be happier that I didn't, given how this turned out. I should also reiterate that this is NOT in any way an attack on the course staff; they clearly had the right intentions and the right rationale for their changes. It just so happens that the final product was a devastatingly poor student experience. I might add: the project is worth 35% of your total grade and the labs only a portion of 10%, yet I've taken more away from the labs, given how panic-inducing the project was. I've never seen an effort-to-marks ratio this disproportionate, even in some parts of HSC English.
Contact Hours
2 x 2hr lectures
1 x 3hr tutlab
Difficulty
2/5
Lecture Recordings?
Yes
Lecturer(s)
Ashesh Mahidadia
Notes / Materials Available

Slides and tutor notes, lab exercises
Overall Rating
-2/5 (adjusted from 0 pre-exam)
Post-Exam
Literally all the pre-exam problems were compounded. I went into the exam a bit more open-minded, hoping for improvement that unfortunately never came. The exam itself was shocking; I would not be surprised if many people failed the 40% hurdle (on raw marks, before any scaling).

I will give them the fact that the theory part of the exam was pretty smooth sailing, and well written. The programming questions, though, just about summed up the whole term: too long, too hard and too verbose. Difficulty-wise, literally none of the stuff we were told to prepare with (sample questions, lab questions, tutorial questions) came close to the programming section. The prep was piss-easy; the exam was brutally difficult. The samples were largely irrelevant anyway, because we'd already seen those questions as lab problems. I would imagine some if not most of the students who did the recommended preparation were still 100% screwed, which speaks to the ridiculousness of the exam.

You basically had two choices: plan out your response or dive straight in. Either way, you'd hit time drains. Diving straight in meant you couldn't properly tackle the problem, which matters in a course literally called Object-Oriented Design and Programming. Planning out your response would take too long (as it did for me, after which I panicked and ended up half-arsing both a plan and a response), leaving you with not enough time to complete the exam. The sheer verbosity and length of the exam made it impossible to finish; I doubt the writers of the exam sat it themselves, or even gave it to a tutor to trial, because it was just frankly ridiculous. Even six hours - twice the allocated time - wouldn't have saved the majority of the cohort (and it would have extended the pain and confusion anyway), who were making "haha see you next year guys" jokes after the exam. If last term's exam was merely "bad" (or so I have heard), I have no choice but to brand this one absolutely fucked. I have never taken a worse-written exam, nor had a worse exam experience, EVER (regardless of whether it was self-sabotage, as has happened before, or the fault of the people running the exam). It's telling that I've enjoyed courses I didn't do well in, and I rate courses on their merits regardless of my mark, so I think I'm being more than fair to this offering.

Again, this course absolutely has the potential to be a good course, but this offering has been nothing short of shocking. I really thought the automarking saga was rock bottom, but as it turns out there was an even rockier bottom underneath. I wanted to rant more, but I'm honestly so done with this particular offering of the course; I think the fact that a) I've bumped my rating down to NEGATIVE two, and b) "I have never taken a worse-written exam, nor had a worse exam experience, EVER", says more than enough about a course already rated 0.
Textbook
Some suggestions for books that cover at least some of the topics in this course
Head First Design Patterns, by Eric Freeman and Elisabeth Robson, with Kathy Sierra and Bert Bates
Refactoring: Improving the Design of Existing Code, by Martin Fowler
Year & Trimester Of Completion
21T3
Your Mark / Grade
81 DN


anomalous

4 years ago

Assessment
Due to the continuation of online delivery in 20T2, assessment priorities shifted to take weight off the final exam and to place it on practical work (i.e. the project and labs). The precise breakdown was
- 3x lab assessments, worth 20% of your course mark (well, kind of - read my comments)
- 1x individual programming assignment, worth 15% of your course mark
- 1x pair project, worth 35% of your course mark (try to organise a partner beforehand)
- final exam, worth 30% of your course mark
Assumed Knowledge
The formal prerequisites are
- COMP1531
- COMP2521 or COMP1927 (is there really anyone left who’s senior enough to have done 1927 but hasn’t done this course or the former 2911 yet, though?)

I might add that if you only recently completed 1531 (as in, within the last year), then the course may assume you’ve covered some domain modelling tools (most notably, UML diagrams) that you perhaps haven’t, so it is worth looking into this before term.
Comments
I did not have high hopes for this course going into it, expecting it to be similar in feel to 1531: a bit drab, but stuff you ultimately have to know as a CS/SEng student. I'm slightly skeptical of OOP, but the course content was more insightful than what was taught in 1531. The tutors and lab assistants were also great, given what they had to put up with. Those few points are about all the good I can report, though.

If I had to sum up what was bad about this course this term: they completely botched the handling of assessments, mostly through poor planning and even poorer communication from the higher-up course staff. While I know quite a lot of vitriol was directed at the course staff during the term (some of it unsubstantiated and needlessly mean), I really do hope they take the time to reflect on the genuine feedback they received, because there are many lessons to learn. If you're the type who wants all the precise details, here's a play-by-play account of the tragedy that was COMP2511 in 20T2 - if not, skip to the final paragraph for my closing thoughts.

Our labs were made worth 20% of your overall mark, distributed across 3 lab assessments scattered over 3 weeks of the term. The first of these was a total disaster, on account of being a ridiculous 6-minute interrogation over a Teams call, with your tutor/lab assistant asking questions scarcely related to the labs supposedly being assessed. A lot of the grief this caused could've been avoided with clear communication telling students exactly what would happen, but the staff were oddly secretive and waited until a few hours beforehand to even mention more details. Inconsistent messaging about whether you actually had to do the labs to get marks plagued the other lab assessments too, with the third and final one being cancelled altogether and replaced by an automarked lab. That said, the other lab assessments weren't nearly as ridiculous as the first, and the staff were at least gracious enough to scale the first assessment's results against the others, considering that a lot of students performed very poorly in it.

The major project was also a huge nuisance and my least favourite part of the course by far. It started with us not getting pair repositories until over a week after the project's release. The first two milestones were also put too close together, due one week apart in weeks 7 and 8 respectively. All of this culminated in the second milestone being a hassle: we were meant to have started working on it at the start of week 6, but didn't have repositories to do that properly until partway into that week, and we were also meant to use the feedback from milestone 1, which you wouldn't receive until you demonstrated to your tutor during your week 7 lab! The final milestone also involved a bit too much GUI work in my opinion, and felt much more like a graphic design exercise than OOP. Making extensions mandatory in order to get full marks in the milestone is a bit lame too. Perhaps it's time to come up with a new project, since this one has been used for quite a while now (at least a year, if I'm not mistaken).

The exam was a predictable mess. As absolutely no-one saw coming (/s), the load from students trying to submit caused WebCMS to wet itself at crunch time, 10 minutes before the exam finished, leaving a lot of people in limbo and unable to submit their work. This isn't the fault of the course staff, as they aren't responsible for the integrity of CSE's infrastructure, but I think a harder, lengthier exam with a 24-hour completion window would've been better, to avoid the concentrated load. It's not as if this hasn't happened before, either - I heard one of the security courses caused some trouble for CSE's infrastructure last term, and that was an even smaller course in terms of students enrolled.

The course is in major need of a return to form, or they need to find another way of handling online delivery for 20T3, because their plan this term didn't work. I'm sure this was just an extreme reaction to having to deliver the course online; there'd be no reason to do these things during physical delivery. Online courses are the meta right now though, and this course was a nightmare this term. It's a shame that CSE students have no choice but to do this course, especially while it's in its current state, but if you're able to, I would avoid doing it until 2021.
Contact Hours
- 2x 2 hour lectures
- 1x 1 hour tutorial, followed by 1x 2 hour lab
Difficulty
5/5 just because of how much of a time drain and organisational nightmare it was this particular term
Lecture Recordings?
Yes, screen and voice recorded.
Lecturer(s)
Dr. Ashesh Mahidadia
Notes / Materials Available
The course provides notes and lecture code; however, there are a number of online resources that prove invaluable for the more theory-oriented parts of the course - notably, Refactoring Guru as a single point of reference for the design patterns you learn. Some slides in the course take resources directly from Refactoring Guru, so if you don't feel like listening to lectures, why not cut out the middleman?
Overall Rating
1/5 this term (originally 0/5, but a 0 is pretty harsh on second thought, so I've been generous) - once again, take this with a grain of salt
Textbook
No prescribed textbook, however Head First Design Patterns by Freeman and Sierra was recommended.

I can't comment on the usefulness of this book since I didn't use it, but some of the lecture examples were lifted straight from this book, so if you're struggling to grasp some of the patterns in lectures, you may be in luck.
Year & Trimester Of Completion
20T2
Your Mark / Grade
96 HD


Opengangs

4 years ago

Assessment
- 3 lab assessments (worth 20% - this was to replace the regular lab marking for the 20T2 offering and may not reflect any future offerings).
- 1 assignment (worth 15%).
- 1 project (split into three milestones worth 7 + 11 + 17 marks - worth 35%).
- 1 final exam (worth 30%)
Comments
This is one of the few level 2 core courses, along with COMP2521, and it serves to enrich students in the field of software engineering. Students work in pairs to develop a project over the term, which is marked in Week 10. Like its level 1 counterpart COMP1531, the course has students collaborate with others and develop a habit of writing productive code. As I've worked in software engineering before, I was looking forward to the course. However, the management of this offering left me rather bitter about it.

One of the major issues with the course this term was the lack of transparency between students and staff. Given the staff's decision to completely change the grading system (from lab marking to lab assessments), it should have been made clear what the expectations were for these new assessments. However, we received very little information until the day of the first round of assessments. I personally wanted more clarity on a) how the assessments would be handled, and b) what specifically we were meant to prepare, because when it came to actually doing the lab assessments, it was not what I expected at all. When I think "lab assessment", I think being marked on the contents of our labs, not a timed pop quiz that required us to read 8pt-font code off the tutor's screen.

Another huge issue was the untimely delivery of assignments and, more importantly, the project. The project was scheduled for release around Week 3/4 (as per the course outline), but the specification wasn't released until late Week 5, which didn't give us a lot of time to work on the milestones (considering we had other assignments due around the same time). I remember pulling a few all-nighters to get the project done in time because of the other commitments I had to meet during the term - it wasn't a pleasant experience. As a result, the project felt like a chore, and no one wants that! In saying that, though, I felt the contents of the project were quite interesting, and it taught me a lot of object-oriented principles, which was helpful for the finals.
These two major issues alone were really what put me off COMP2511, because it felt like I had to do more work than intended just to get through the term. The course dragged on for too long, and I felt like I received very little in return. I just hope they read the MyExperience feedback and improve, because they destroyed a perfectly good course this term. Shame.
Contact Hours
7 hours (2 x 2 hour lectures, 1 x 1 hour tutorial, 1 x 2 hour lab).
Difficulty
2/5
Lecture Recordings?
Yes.
Lecturer(s)
Dr. Ashesh Mahidadia
Notes / Materials Available
No required text, lecture notes were sufficient.
Overall Rating
1.5/5 (the 1.5 really comes from the 4 tutorial sessions that we had this term lol).
Pre-Requisite
COMP1531 and either COMP2521 or COMP1927.
Textbook
^
Year & Trimester Of Completion
20T2
Your Mark / Grade
89 HD


RuiAce

4 years ago

Assessment
- 10% assignment
- 10% labs (some labs went for two weeks and were weighted more)
- 25% project (split into milestones of 5%, 8% and 12%)
- 55% final exam
Assumed Knowledge
COMP1531 and COMP2521
Comments

This is one of the many Level 2 courses that are core to all computer science degrees.

Compared to its precursor course COMP1531, I personally found this course significantly more enjoyable. There were some digressions here and there, but for the most part this course actually taught the concepts of OOP and OO design. Despite still hating design (and preferring purer programming concepts), I found the design patterns aesthetic to use. They're really versatile, and applying them appropriately is a challenge in itself. When used properly, they can make the code look much nicer. (MUCH fewer if/switch statements.)
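To illustrate the if/switch point with a toy example of my own (not one from the course): the Strategy pattern swaps a switch over behaviours for polymorphic objects, so adding a new behaviour means adding a class rather than editing every switch statement.

    // Toy Strategy pattern sketch: fare calculation without a switch.
    interface FareStrategy {
        double price(double km);
    }

    class StandardFare implements FareStrategy {
        public double price(double km) { return 2.0 * km; }
    }

    class PeakFare implements FareStrategy {
        public double price(double km) { return 3.5 * km; }
    }

    class Trip {
        private final FareStrategy fare; // injected behaviour replaces a switch on an enum

        Trip(FareStrategy fare) { this.fare = fare; }

        double cost(double km) { return fare.price(km); }
    }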

Admittedly, Java can be a pain to deal with because you have so many files to work with. But that's just made me come to appreciate IDEs more. (Still, I don't blame people who would rather avoid Java altogether.)

Design patterns made up probably 75-80% of the final exam; you have to know ALL of the design patterns they teach you well. Code smells/refactoring and Java made up the remaining 20-25%. (The Javadoc and the code smells lecture slides were provided in the exam, and both Eclipse and Visual Studio Code could be used. I was very appreciative of this.)

The exam does make you implement some of the design patterns. But to be honest, I found myself struggling more on understanding the Java skeleton code than actually doing the patterns.

All 1.5 points lost in the rating really came from the project. It was so tedious - the only thing I kept complaining about having to do whilst taking this course. I was lucky that my partner was just better at it than me.
Contact Hours
2 x 2hr lectures, 2hr laboratory (somehow timetabling forgot to give us a tutorial this term)
Difficulty
4/5 (although I feel it'd be 2.5/5 without the project)
Lecture Recordings?
Yes
Lecturer(s)
Dr. Ashesh Mahidadia
Notes / Materials Available
Lecture slides were mostly a summarised version of an extremely helpful website (Refactoring Guru). Lecture slides also taught some basic Java. A sample paper was released, but it had more about Java itself and refactoring than design patterns, at least relative to the exam.
Overall Rating
3.5/5
Textbook
N/A
Year & Trimester Of Completion
19t3
Your Mark / Grade
92 HD


kierisuizahn

5 years ago

Comments
I was expecting a re-run of COMP1531, but was pleasantly surprised when there was actually very little rote. Most of the course teaches design patterns, which are easy to remember if you implement them; some of the refactoring techniques do require rote, but overall the course was mostly about identifying the benefits of the design patterns and gaining experience in applying them, so you could tell which pattern would suit a given problem. The project was really fun (though that might be because we went over the top) and was the best part of the course in my opinion. The lecture slides were somewhat disconnected, which made revision difficult having not attended lectures, but after writing out a summary of everything and organising it, the course was very easy to study for. The concepts taught are really useful, and I would recommend this course to anyone considering working in industry. A little bit dry at times, but overall, pretty good.
Contact Hours
2x 2hr Lecture, 1x 1hr Tutorial + 2hr Lab (Combined)
Difficulty
2/5
Lecture Recordings?
Yes - screen and voice recorded.
Lecturer(s)
Dr. Ashesh Mahidadia
Notes / Materials Available
Lecture slides all uploaded. Tutorials and labs with solutions posted. Sample final exam provided, with solutions.
Overall Rating
3.5/5
Textbook
Note: I don't use textbooks and can't comment on their usefulness. None prescribed, but useful references:
Year & Term Of Completion
2019 T1
Your Mark / Grade
99 HD
